Pieces on Machine Consciousness

Late last month, I was at Theorizing the Web, in NYC, to moderate Panel B3, “Bot Phenomenology,” a panel of people I was very grateful for and very lucky to be able to bring together. Johnathan Flowers, Emma Stamm, and Robin Zebrowski were my interlocutors in a discussion about the potential nature of nonbiological phenomenology. Machine consciousness. What robots might feel.

I led them through with questions like “What do you take phenomenology to mean?” and “What do you think of the possibility of a machine having a phenomenology of its own?” We discussed different definitions of “language” and “communication” and “body,” and unfortunately didn’t have a conversation about how certain definitions of those terms mean that what would count as language between two cats would count only as signalling when a cat communicates with humans.

It was a really great conversation, and the live stream video for it is here, and linked below (for now; it may go away at some point, to be replaced by a static YouTube link, and when I know that that’s happened, I will update the links and embeds here).

Read the rest of Nonhuman and Nonbiological Phenomenology at A Future Worth Thinking About

Additionally, I have another quote about the philosophical and sociopolitical implications of machine intelligence in this extremely well-written piece by K.G. Orphanides at WIRED UK. From the article:

Williams, a specialist in the ethics and philosophy of nonhuman consciousness, argues that such systems need to be built differently to avoid a corporate race for the best threat analysis and response algorithms which [will be] likely to [see the world as] a “zero-sum game” where only one side wins. “This is not a perspective suited to devise, for instance, a thriving flourishing life for everything on this planet, or a minimisation of violence and warfare,” he adds.

Much more about this, from many others, at the link.

Until Next Time.

Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s TECHNOLOGY AND THE VIRTUES

[“Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues” was originally published in Social Epistemology Review and Reply Collective 7, no. 2 (2018): 64-69.
The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3US]

[Image of an eye in a light-skinned face; the iris and pupil have been replaced with a green neutral-faced emoji; by Stu Jones via CJ Sorg on Flickr / Creative Commons]

Shannon Vallor’s most recent book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting takes a look at what she calls the “Acute Technosocial Opacity” of the 21st century, a state in which technological, societal, political, and human-definitional changes occur at such a rapid-yet-shallow pace that they block our ability to conceptualize and understand them.[1]

Vallor is one of the most publicly engaged technological ethicists of the past several years, and much of her work’s weight comes from its direct engagement with philosophy—both philosophy of technology and various virtue ethical traditions—and the community of technological development and innovation that is Silicon Valley. It’s from this immersive perspective that Vallor begins her work in Virtues.

Vallor contends that we need a new way of understanding the projects of human flourishing and seeking the good life, an understanding which can help us reexamine how we make and participate through and with the technoscientific innovations of our time. The project of this book, then, is to provide the tools to create this new understanding, tools which Vallor believes can be found in an examination and synthesis of the world’s three leading virtue ethical traditions: Aristotelian ethics, Confucian ethics, and Buddhism.

Continue reading

A Discussion on Daoism and Machine Consciousness

Over at AFutureWorthThinkingAbout, there are the audio and text of a talk about how nonwestern philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion, by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings, meaning engaging multiple tokens and types of minds outside of the assumed human “default” of straight, white, cis, ablebodied, neurotypical male:

My starting positions, here, are that, 1) in order to do the work correctly, we literally must refrain from resting in abstraction, where, by definition, the kinds of models that don’t seek to actually engage with the people in question from within their own contexts, before deciding to do something “for someone’s own good,” represent egregious failure states. That is, we have to try to understand each other well enough to perform mutually modeled interfaces of “what you’d have done unto you and what they’d have you do unto them.” I know it doesn’t have the same snap as “do unto others,” but it’s the only way we’ll make it through.

[An image of a traditional Yin-Yang carved in a silver ring]

2) There are multiple types of consciousness, even within the framework of the human spectrum, and that the expression of or search for any one type is in no way meant to discount, demean, or erase any of the others. In fact, it is the case that we will need to seek to recognize and learn to communicate with as many types of consciousness as may exist, in order to survive and thrive in any meaningful way. Again, not doing so represents an egregious failure condition. With that in mind, I use “machine consciousness” to mean a machine with the capability of modelling a sense of interiority and selfness similar enough to what we know of biological consciousnesses to communicate it with us, not just a generalized computational functionalist representation, as in “AGI.”

For the sake of this, as I’ve related elsewhere, I (perhaps somewhat paradoxically) think the term “artificial intelligence” is problematic. Anything that does the things we want machine minds to do is genuinely intelligent, not “artificially” so, where we use “artificial” to mean “fake,” or “contrived.” To be clear, I’m specifically problematizing the “natural/technological” divide that gives us “art vs artifice,” for reasons previously outlined here.

The overarching project of training a machine learning program and eventual AI will require engagement with religious texts (a very preliminary take on this has been taken up by Rose Eveleth at the Flash Forward Podcast), but also a broader engagement with discernment and decision-making. Even beginning to program or code for this will require us to think very differently about the project than has thus far been in evidence.

Read or listen to the rest of A Discussion on Daoism and Machine Consciousness at A Future Worth Thinking About

On Adaptable Modes of Thought

This piece originally appeared at A Future Worth Thinking About

-Human Dignity-

The other day I got a CFP for “the future of human dignity,” and it set me down a path thinking.

We’re worried about shit like mythical robots that can somehow simultaneously enslave us and steal the shitty low paying jobs we none of us want to but all of us have to have so we can pay off the debt we accrued to get the education we were told would be necessary to get those jobs, while other folks starve and die of exposure in a world that is just chock full of food and houses…

About shit like how we can better regulate the conflated monster of human trafficking and every kind of sex work, when human beings are doing the best they can to direct their own lives—to live and feed themselves and their kids on their own terms—without being enslaved and exploited…

About, fundamentally, how to make reactionary laws to “protect” the dignity of those of us whose situations the vast majority of us have not worked to fully appreciate or understand, while we all just struggle to not get: shot by those who claim to protect us, willfully misdiagnosed by those who claim to heal us, or generally oppressed by the system that’s supposed to enrich and uplift us…

…but no, we want to talk about the future of human dignity?

Louisiana’s drowning, Missouri’s on literal fire, Baltimore is almost certainly under some ancient mummy-based curse placed upon it by the angry ghost of Edgar Allan Poe, and that’s just in the One Country.

Motherfucker, human dignity ain’t got a Past or a Present, so how about let’s reckon with that before we wax poetically philosophical about its Future.

I mean, it’s great that folks at Google are finally starting to realise that making sure the composition of their teams represents a variety of lived experiences is a good thing. But now the questions are, 1) do they understand that it’s not about tokenism, but about being sure that we are truly incorporating those who were previously least likely to be incorporated, and 2) what are we going to do to not only specifically and actively work to change that, but also PUBLICIZE THAT WE NEED TO?

These are the kinds of things I mean when I say, “I’m not so much scared of/worried about AI as I am about the humans who create and teach them.”

There’s a recent opinion piece at the Washington Post, titled “Why perceived inequality leads people to resist innovation.” I read something like that and I think… Right, but… that perception is a shared one based on real impacts of tech in the lives of many people; impacts which are (get this) drastically unequal. We’re talking about implications across communities, nations, and the world, at an intersection with a tech industry that has a really quite disgusting history of “disruptively innovating” people right out of their homes and lives without having ever asked the affected parties about what they, y’know, NEED.

So yeah. There’s a fear of inequality in the application of technological innovation… Because there’s a history of inequality in the application of technological innovation!

This isn’t some “well aren’t all the disciplines equally at fault here,” pseudo-Kumbaya false equivalence bullshit. There are neoliberal underpinnings in the tech industry that are basically there to fuck people over. “What the market will bear” is code for, “How much can we screw people before there’s backlash? Okay so screw them exactly that much.” This model has no regard for the preexisting systemic inequalities between our communities, and even less for the idea that it (the model) will both replicate and iterate upon those inequalities. That’s what needs to be addressed, here.

Check out this piece over at Killscreen. We’ve talked about this before—about how we’re constantly being sold that we’re aiming for a post-work economy, where the internet of things and self-driving cars and the sharing economy will free us all from the mundaneness of “jobs,” all while we’re simultaneously being asked to ignore that our trajectory is gonna take us straight through and possibly land us square in a post-Worker economy, first.

Never mind that we’re still gonna expect those ex-workers to (somehow) continue to pay into capitalism, all the while.

If, for instance, either Uber’s plan for a driverless fleet or the subsequent backlash from their stable—I mean “drivers”—is shocking to you, then you have managed to successfully ignore this trajectory.


Disciplines like psychology and sociology and history and philosophy? They’re already grappling with the fears of the ones most likely to suffer said inequality, and they’re quite clear on the fact that, the ones who have so often been fucked over?

Yeah, their fears are valid.

You want to use technology to disrupt the status quo in a way that actually helps people? Here’s one example of how you do it: “Creator of chatbot that beat 160,000 parking fines now tackling homelessness.”

Until then, let’s talk about constructing a world in which we address the needs of those marginalised. Let’s talk about magick and safe spaces.

-Squaring the Circle-

Speaking of CFPs, several weeks back, I got one for a special issue of Philosophy and Technology on “Logic As Technology,” and it made me realise that Analytic Philosophy somehow hasn’t yet understood and internalised that its wholly invented language is a technology…

…and then that realisation made me realise that Analytic Philosophy hasn’t understood that language as a whole is a Technology.

And this is something we’ve talked about before, right? Language as a technology, but not just any technology. It’s the foundational technology. It’s the technology on which all others are based. It’s the most efficient way we have to cram thoughts into the minds of others, share concept structures, and make the world appear and behave the way we want it to. The more languages we know, right?

We can string two or more knowns together in just the right way, and create a third, fourth, fifth known. We can create new things in the world, wholecloth, as a result of new words we make up or old words we deploy in new ways. We can make each other think and feel and believe and do things, with words, tone, stance, knowing looks. And this is because Language is, at a fundamental level, the oldest magic we have.


Scene from the INJECTION issue #3, by Warren Ellis, Declan Shalvey, and Jordie Bellaire. ©Warren Ellis & Declan Shalvey.

Lewis Carroll tells us that whatever we tell each other three times is true, and many have noted that lies travel far faster than the truth, and at the crux of these truisms—the pivot point, where the power and leverage are—is Politics.

This week, much hay is being made about the University of Chicago’s letter decrying Safe Spaces and Trigger Warnings. Ignoring for the moment that every definition of “safe space” and “trigger warning” put forward by their opponents tends to be a straw man of those terms, let’s just make an attempt to understand where they come from, and how we can situate them.

Trauma counseling and trauma studies are the epitome of where safe space and trigger warnings come from, and for the latter, that definition is damn near axiomatic. Triggers are about trauma. But safe space language has far more granularity than that. Microaggressions are certainly damaging, but they aren’t on the same level as acute traumas. Where acute traumas are like gunshots or bomb blasts (and may indeed be those actual things), societal microaggressions are more like a slow, constant siege. But we still need the language of safe spaces to discuss them—said space is something like a bunker in which to regroup, reassess, and plan for what comes next.

Now it is important to remember that there is a very big difference between “safe” and “comfortable,” and when laying out the idea of safe spaces, every social scientist I know takes great care to outline that difference.

Education is about stretching ourselves, growing and changing, and that is discomfort almost by definition. I let my students know that they will be uncomfortable in my class, because I will be challenging every assumption they have. But discomfort does not mean I’m going to countenance racism or transphobia or any other kind of bigotry.

Because the world is not a safe space, but WE CAN MAKE IT SAFER for people who are microaggressed against, marginalised, assaulted, and killed for their lived identities, by letting them know not only how to work to change it, but SHOWING them through our example.

Like we’ve said, before: No, the world’s not safe, kind, or fair, and with that attitude it never will be.

So here’s the thing, and we’ll lay it out point-by-point:

A Safe Space is any realm that is marked out for the nonjudgmental expression of thoughts and feelings, in the interest of honestly assessing and working through them.

“Safe Space” can mean many things, from “safe FROM Racist/Sexist/Homophobic/Transphobic/Fatphobic/Ableist Microaggressions” to “safe FOR the thorough exploration of our biases and preconceptions.” The terms of the safe space are negotiated at the marking out of them.

The terms are mutually agreed-upon by all parties. The only imposition would be to be open to the process of expressing and thinking through oppressive conceptual structures.

Everything else—such as whether to address those structures as they exist in ourselves (internalised oppressions), in others (aggressions, micro- or regular sized), or both and their intersection—is negotiable.

The marking out of a Safe Space performs the necessary function, at the necessary time, defined via the particular arrangement of stakeholders, mindset, and need.

And, as researcher John Flowers notes, anyone who’s ever been in a Dojo has been in a Safe Space.

From a Religious Studies perspective, defining a safe space is essentially the same process as that of marking out a RITUAL space. For students or practitioners of any form of Magic[k], think Drawing a Circle, or Calling the Corners.

Some may balk at the analogy to the occult, thinking that it cheapens something important about our discourse, but look: Here’s another way we know that magick is alive and well in our everyday lives:

If they could, a not-insignificant number of US Republicans would overturn the Affordable Care Act and rally behind a Republican-crafted replacement (RCR). However, because the ACA has done so very much good for so many, it’s likely that the only RCR that would have enough support to pass would be one that looked almost identical to the ACA. The only material difference would be that it didn’t have President Obama’s name on it—which is to say, it wouldn’t be associated with him, anymore, since his name isn’t actually on the ACA.

The only reason people think of the ACA as “Obamacare” is because US Republicans worked so hard to make that name stick, and now that it has been widely considered a triumph, they’ve been working just as hard to get his name away from it. And if they did manage to achieve that, it would only be true due to some arcane ritual bullshit. And yet…

If they managed it, it would be touted as a “Crushing defeat for President Obama’s signature legislation.” It would have lasting impacts on the world. People would be emboldened, others defeated, and new laws, social rules, and behaviours would be undertaken, all because someone’s name got removed from a thing in just the right way.

And that’s Magick.

The work we do in thinking about the future sometimes requires us to think about things from what stuffy assholes in the 19th century liked to call a “primitive” perspective. They believed in a kind of evolutionary anthropological categorization of human belief, one in which all societies move from “primitive” beliefs like magic, through moderate belief in religion, all the way to sainted, perfectly rational science. In contemporary Religious Studies, this evolutionary model is widely understood to be bullshit.

We still believe in magic, we just call it different things. The concept structures of sympathy and contagion are still at play, here, the ritual formulae of word and tone and emotion and gesture all still work when you call them political strategy and marketing and branding. They’re all still ritual constructions designed to make you think and behave differently. They’re all still causing spooky action at a distance. They’re still magic.

The world still moves on communicated concept structure. It still turns on the dissemination of the will. If I can make you perceive what I want you to perceive, believe what I want you to believe, move how I want you to move, then you’ll remake the world, for me, if I get it right. And I know that you want to get it right. So you have to be willing to understand that this is magic.

It’s not rationalism.

It’s not scientism.

It’s not as simple as psychology or poll numbers or fear or hatred or aspirational belief causing people to vote against their interests. It’s not that simple at all. It’s as complicated as all of them, together, each part resonating with the others to create a vastly complex whole. It’s a living, breathing thing that makes us think not just “this is a thing we think” but “this is what we are.” And if you can do that—if you can accept the tools and the principles of magic, deploy the symbolic resonance of dreamlogic and ritual—then you might be able to pull this off.

But, in the West, part of us will always balk at the idea that the Rational won’t win out. That the clearer, more logical thought doesn’t always save us. But you have to remember: Logic is a technology. Logic is a tool. Logic is the application of one specific kind of thinking, over and over again, showing a kind of result that we convinced one another we preferred to other processes. It’s not inscribed on the atoms of the universe. It is one kind of language. And it may not be the one most appropriate for the task at hand.

Put it this way: When you’re in Zimbabwe, will you default to speaking Chinese? Of course not. So why would we default to mere Rationalism, when we’re clearly in a land that speaks a different dialect?

We need spells and amulets, charms and warded spaces; we need sorcerers of the people to heal and undo the hexes being woven around us all.

-Curious Alchemy-

Ultimately, the rigidity of our thinking and our inability to adapt has led us to be surprised by too much that we wanted to believe could never have come to pass. We want to call all of this “unprecedented,” when the truth of the matter is, we carved this precedent out every day for hundreds of years, and the ability to think in weird paths is what will define those who thrive.

If we are going to do the work of creating a world in which we understand what’s going on, and can do the work to attend to it, then we need to think about magic.

If you liked this article, consider dropping something into the Technoccult & A Future Worth Thinking About Tip Jar

On Magick, Technology, Philosophy, and Pop-Culture

Those are my main areas of interest. It may not sound like a whole lot, but you’d honestly be surprised at the kind of mileage you can get out of recombining them and applying them as lenses through which to look at the world.

Hello. I’m Damien Williams, known by many of you as Wolven. Klint did a pretty fantastic job of introducing me, last time, so I’m not going to rehash any of that. What I want to do, right now, is to point you at a few places where you can get a decent sense for the kinds of plans I have for what we’re going to be doing, around here.

First, there is, of course, the Mindful Cyborgs interview I did with Klint.

Then there’s my presentation from Magick.Codes.

My Master’s Thesis.

My article “Fairytales Of Slavery: Societal Distinctions, Technoshamanism, and Nonhuman Personhood.”

And this atemporal conversation between myself and M1K3y, over at the Cosmic Anthropology Podcast.

What I want to be doing here is taking the time to engage in conversations with multiple thinkers about philosophical, religious, political, and occult perspectives on our science fictional present, and posting the audio, video, or transcriptions of those conversations. I want to do this with some major frequency, but that requires the time and space to do so.

Which brings me to my next point: A discussion of an overarching framework of where A Future Worth Thinking About and Technoccult are headed. “Protected: Thinking About the Worth of the Future: Logistics.”

To be frank, it’s a money conversation. As I say, there, “I know we’re usually encouraged to not discuss anything as gauche as cash, in Western Society, but since we’re somehow still using a system of psychologically transferred and collectively-agreed-upon value to determine who gets to eat food, I say fuck it. Let’s talk it out.”

So please take a look, there, then tell your friends.

The Technoccult Tumblr is here.

Twitter handles are @Wolven and @Techn0ccult

The Perfunctory Facebook Page is here.

You can sign up for the newsletter here.

And as always, the Patreon is here.

That’s enough, for now. I need to go get back to work on some more substantive posts. See you next time. And thanks.

Mindful Cyborgs: What is Post-Nihilism? Part 2

The second installment of our interview with synthetic zero‘s Arran James and Michael Pyska. This time around, we talk about post-nihilism as political therapeutics, Stoicism and what we would do on the eve of human extinction.

Download and Notes: Mindful Cyborgs: Post Nihilistic Whispers Part 2

You can find part one here.

Mindful Cyborgs: What Is Post-Nihilism?

First of all, starting this week Mindful Cyborgs has a new regular co-host: Sara M. Watson. This episode, she, Chris Dancy and I interviewed Arran James and Michael Pyska of the “post-nihilist” website Synthetic Zero. We talked about Nietzsche, nihilism, post-nihilism, and Buddhism.

Tune in next week for the second half!

Mutation Vectors: A Fantastic Death Abyss

David Bowie Outside

Status Update

Just finished recording an episode of Mindful Cyborgs with Arran James and Michael Pyska of the “post-nihilist” website Synthetic Zero.

It was a great episode, and I can’t wait for it to be online, but it’s left me in a weirder than usual headspace.


So what is post-nihilism? I should probably tell you to wait til the podcast is out. But in the meantime, here’s a bit from an article Arran and Michael wrote in the Occupied Times:

After nihilism, then, are embodied realisations of and exposures to vibrant ecologies of being offering an ultimately untameable wilderness which we participate in on an equal footing with all other bodies, even if we have an unequal ecological effect. In order to cope-with and cope-within the wilderness of being we must abandon the charnel-house of meaning and its theological tyrannies once and for all. As coping-beings we must leave our reifications behind in order to engage in post-nihilist praxis: an ecologistics of tracing these rhythms and activities, their multiple couplings and decouplings, and taking responsibility for our way of cohabiting in, with and alongside other bodies.


Finally getting around to playing A Dark Room, a text-based game I’ve mentioned before. It’s like Oregon Trail meets The Road. Dark stuff indeed.


I’ve been listening to David Bowie’s Outside a lot this week. It was the first Bowie album I ever heard, back when I was a teenage rivethead, but I hadn’t listened to it in a good 14 years. Back then I knew it was supposed to be a concept album, and that Bowie had worked with Brian Eno on it, but that was about it. From Wikipedia:

Bowie and Eno visited the Gugging psychiatric hospital near Vienna, Austria in early 1994 and interviewed and photographed its patients who were famous for their “Outsider Art.”[1] Bowie and Eno brought some of that art back with them into the studio[1] as they worked together in March 1994, coming up with a three-hour piece that was mostly dialog. Late in 1994, Q magazine asked Bowie to write a diary for 10 days (to later be published in the magazine), but Bowie, fearful his diary would be boring (“…going to a studio, coming home and going to bed”), instead wrote a diary for one of the fictional characters (Nathan Adler) from his earlier improvisation with Eno. Bowie said “Rather than 10 days, it became 15 years in his life!” This became the basis for the story of Outside.

Here’s the Adler diary.

I was never able to follow the narrative of Outside, but this page tries to unpack the songs and stitch the story together.

BTW, there were also some fantastic moments on the Outside tour, like Bowie singing “Scary Monsters,” “Reptile,” “Hurt,” and others with Nine Inch Nails.

The Aesthetics of Noise

Torben Sangild writes:

Apollo represents appearance, form, individuality, beauty and dream; the Apollonian aesthetics is an embellishment of suffering, a self-conscious lie, a veiling of cruelty by use of form and elegance, a semblance of beauty. Dionysus, on the other hand, represents ecstasy, being, will, intoxication and unity; the Dionysian aesthetics is a direct confrontation with the terrible foundation of being, an absurd will driving us all in our meaningless lives. In the Dionysian ecstasy individuality is transgressed in favor of identification with the universal will – a frightening yet blissful experience. Frightening, that is, because it is a death-like giving up of the Ego, if only for a few seconds; blissful in letting go of the responsibilities of being a subject. The Dionysian experience is a “metaphysical comfort”, knowing that suffering is a necessary part of the effects of the eternal will – the destruction of things in order to create anew. In the Dionysian ecstasy one is no longer concerned with one’s individual suffering, seeing instead things from the universal point of view.

In music, the ecstasy of noise is undoubtedly a Dionysian effect, as opposed to the Apollonian melody and form. As mentioned above, the German words Rausch (ecstasy) and Geräusch (noise) are related, pointing towards this fact. The Dionysian is that which is not totally controlled or formed, e.g. screams and noises. The Apollonian elements are seductive, inciting the listener to enter the ecstatic bliss of the Dionysian, enabling the listener to dare the confrontation with the dreadfulness of existence. Therefore, Nietzsche says, the Dionysian needs the Apollonian.

Merzbow is so demanding exactly because he refuses this; he does not soften the harshness of noise with any Apollonian elements. Listening to Merzbow is thus a very different experience from the Sonic Youth maelstrom.

One of the reasons for the ecstatic effect of noise is its sublime character. The sublime is that which exceeds the limits of the senses, perceived as chaos or vastness. Despite our ability to put these words to it, the sublime goes beyond making sense – we never really understand it. The complexity of noise (in the acoustic sense) overloads the ears and the nervous system and is perceived as an amorphous mass, incomprehensible yet stirring. The delight of the sublime is the satisfaction of confronting the unfathomable.

Full Story: Ubu Web: The Aesthetics of Noise

(Thanks Adam and Ryan!)

The effect of diminished belief in free will

Tom Stafford wrote:

Psychologists have used this section of the book, or sentences taken from it or inspired by it, to induce feelings of determinism in experimental subjects. A typical study asks people to read and think about a series of sentences such as “Science has demonstrated that free will is an illusion”, or “Like everything else in the universe, all human actions follow from prior events and ultimately can be understood in terms of the movement of molecules”.

The effects on study participants are generally compared with those of other people asked to read sentences that assert the existence of free will, such as “I have feelings of regret when I make bad decisions because I know that ultimately I am responsible for my actions”, or texts on topics unrelated to free will.

And the results are striking. One study reported that participants who had their belief in free will diminished were more likely to cheat in a maths test. In another, US psychologists reported that people who read Crick’s thoughts on free will said they were less likely to help others. […]

This puts an extra burden of responsibility on philosophers, scientists, pundits and journalists who use evidence from psychology or neuroscience experiments to argue that free will is an illusion. We need to be careful about what stories we tell, given what we know about the likely consequences.

Fortunately, the evidence shows that most people have a sense of their individual freedom and responsibility that is resistant to being overturned by neuroscience. Those sentences from Crick’s book claim that most scientists believe free will to be an illusion. My guess is that most scientists would want to define what exactly is meant by free will, and to examine the various versions of free will on offer, before they agree whether it is an illusion or not.

Full Story: Mind Hacks: The effect of diminished belief in free will

Interesting stuff, especially when considered alongside the Milgram experiments, which turned out not to be very sound. It also brings to mind the Kitty Genovese myth. If this effect is real, it is important to be aware of it so that we can try to override it in ourselves.
