
Book Review (750 words):

You must choose between:

"The Battle for Your Brain: Defending the Right to Think Freely in the
Age of Neurotechnology"

or

"Unmasking AI: My Mission to Protect What Is Human in a World of Machines"

The book review must include:

1. What is the problem or issue, and why is it important?

Give a concise overview of the problem or issue addressed in the book you have chosen. This overview should serve as the introduction; the reader should be able to walk away and describe the problem and why it matters. When describing the importance of the issue, consider tying in its intersection with media psychology. Then move to stakeholders, law and policy, and finally decision-makers, if applicable.

2. Keep the PEST factors in mind.

Describe in the book review what you have learned from your reading and from class lectures. Take into account PEST: political, economic, social, and technological factors. Not all will always apply, but thinking through and addressing the ones that do will help guide your review.

3. Make a policy recommendation.

Once you have completed a PEST analysis, you should have laid out the evidence behind your proposed policy solutions or recommendations. What are your solutions or recommendations based on your findings? Describe the pros and cons. You are required to account for the different interest groups or stakeholders that would want to get involved on either side of the issue.

4. Give a summary.

Lastly, it will be helpful to again keep the PEST factors in mind when summarizing.

Neurotechnology can tell us whether we’re wired to be conservative or liberal, whether our insomnia is as bad as we think, and whether we’re in love with someone or just “in lust.”

We can learn how we process risks and rewards and whether we’re congenitally disposed to be spendthrifts or tightwads.

Soon, smart football helmets will be able to diagnose concussions immediately after they occur.

Neurotech devices can also track changes in our brains over time, such as the slowing of activity in certain brain regions associated with the onset of conditions like Alzheimer’s disease, schizophrenia, and dementia.

Not everyone wants to know if one of those conditions is in the cards for them, but those who do may benefit from having time to prepare.

It’s already begun at work.

In China, train drivers on the Beijing–Shanghai high-speed rail line, the busiest in the world, must wear EEG devices while they are behind the controls to ensure that they are focused and alert.

According to some news sources, workers in government-controlled factories are required to wear EEG sensors to monitor their productivity and their emotional states, and they can be sent home based on what their brains reveal.

The same neuroscience that gives us intimate access to ourselves can allow companies, governments, and all kinds of actors who don’t necessarily have our best interests in mind access too.

As a legal scholar, author Nita Farahany finds this terrifying: nothing in the US Constitution, state or federal law, or international treaties gives individuals even rudimentary sovereignty over their own brains.

“As an ethicist, lawyer, and philosopher, I believe that we can and should embrace emerging neurotechnology, but only if we first update our concept of liberty to maximize the benefits and minimize the risks of doing so.”

Does society have the right to prohibit us from slowing down our brains or extinguishing painful memories?

What will it mean if our thoughts and emotions are up for grabs, just like the rest of our data being commodified and sold by corporations?

Should employers be allowed to use that data as part of the growing trend of workplace surveillance?

A few questions

Are there any limits to corporations targeting our brains with their products?

Does freedom of thought protect us from government tracking our brains and mental processes?

Will unlocking our brains open our minds to targeted assaults and hacking, and if so, how do we protect ourselves against that risk?

A big question regarding AI

Is embracing neurotechnology necessary for the very survival of our species to compete against the growing capabilities of artificial intelligence?

This book navigates each of these dilemmas and more to help us expand our definition of liberty in the modern era to include our right to cognitive liberty: the right to self-determination over our brains and mental experiences.

Anyone who values their ability to have private thoughts and ruminations — an “inner world” — should care about cognitive liberty.

We are at a pivotal moment in human history, in which control of our brains can be enhanced or lost.

We need to define the contours of cognitive liberty now or risk being too late to do so.

The book unpacks a new right to cognitive liberty and the bundle of rights it includes (mental privacy, freedom of thought, and self-determination) while making accessible the exciting and often startling neuroscience of tracking and hacking the human brain.

“My interest in cognitive liberty goes to the very heart of who I am. It affects you just as deeply, but so far, the issue hasn’t sparked nearly so much concern as I believe it should.

I suspect people are more or less complacent because they don’t yet understand or believe the far-reaching implications of these new technologies.

As an Iranian American with extended family still living in Iran, I have witnessed the chilling effect of government censorship and surveillance on individual liberties, but also the power of technology to mobilize people for change.”

How technology is used is crucial in defining our cognitive liberty.

Exercising a choice to take drugs that change my brain, or to plug in to machines that allow doctors to read it, is quite different from being forcibly administered those drugs or being monitored by doctors without my consent.

With our DNA already up for grabs and our smartphones broadcasting our every move, our brains are increasingly the final frontier for privacy.

A further question is whether our laws can keep up with technological change.

Take the First Amendment of the US Constitution, which protects freedom of speech. Does it also protect freedom of thought?

Does it give us the freedom to alter our thoughts whenever and however we choose, or can the government or society put limitations on what we do with our own brains?

What about the Fifth Amendment?

What does it mean to be protected from self-incrimination when the government can hook you up to a machine and find out what’s in your mind, whether you want to share it or not?

Can companies that we share our brain data with through their applications sell it to third parties? Right now, no laws prevent them from doing so.

We must establish the right to cognitive liberty, to protect our freedom of thought and rumination, mental privacy, and self-determination over our brains and mental experiences.

Turning to the second book, “Unmasking AI”:

The male gaze decides which subjects are desirable and worthy of attention, and it determines how they are to be judged. You may also be familiar with the white gaze, which similarly privileges the representation and stories of white Europeans and their descendants.

“An unseen force is rising … that I call the coded gaze. It is spreading like a virus.”

Inspired by these terms, the coded gaze describes the ways in which the priorities, preferences, and prejudices of those who have the power to shape technology can propagate harm, such as discrimination and erasure. We can encode prejudice into technology even if it is not intentional.

The coded gaze does not have to be explicit to do the job of oppression.

Algorithmic bias occurs when one group is better served than another by an AI system. If you are denied employment because an AI system screened out candidates that attended women’s colleges, you have experienced algorithmic bias.
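One simple way to make this concrete is to compare selection rates across groups, as fairness auditors often do against the “four-fifths” benchmark. The sketch below is illustrative only, not a method from the book; the outcomes and group labels are hypothetical.

    # Hypothetical outcomes from an AI resume screener: (group, selected) pairs.
    outcomes = [
        ("women's college", True), ("women's college", False), ("women's college", False),
        ("other", True), ("other", True), ("other", False),
    ]

    def selection_rate(group: str) -> float:
        picks = [selected for g, selected in outcomes if g == group]
        return sum(picks) / len(picks)

    ratio = selection_rate("women's college") / selection_rate("other")
    print(f"disparate-impact ratio: {ratio:.2f}")  # 0.50, well below the 0.8 benchmark

A ratio this far below 1.0 is one quantitative signal that one group is being served worse than another by the system.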

“In my work, I use the coded gaze term as a reminder that the machines we build reflect the priorities, preferences, and even prejudices of those who have the power to shape technology.”

Like systemic forms of oppression, including patriarchy and white supremacy, it is programmed into the fabric of society. Without intervention, those who have held power in the past continue to pass that power to those who are most like them. This does not have to be intentional to have a negative impact.

In the years since I first encountered the coded gaze, the promise of AI has only become grander.

“It will overcome human limitations, AI developers tell us, and generate great wealth.”

While AI research and development have been going on for decades, in 2023 it seemed the whole world was suddenly talking about AI with fear and fascination.

Generative AI products are only one manifestation of AI.

Predictive AI systems are already used to determine:

1. who gets a mortgage,

2. who gets hired,

3. who gets admitted to college, and

4. who gets medical treatment.

But products like ChatGPT have brought AI to new levels of public engagement and awareness.

Can we make room for the best of what AI has to offer while also resisting its perils?

In a world where decisions about our lives are increasingly informed by algorithmic decision-making, we cannot have racial justice if we adopt technical tools for the criminal legal system that only further incarcerate communities of color.

We cannot have gender equality if we employ AI tools that use historic hiring data that reflect sexist practices to inform future candidate selections that disadvantage women and gender minorities.

We cannot say we are advocating for disability rights and create AI-powered tools that erase the existence of people who are differently abled by adopting ableist design patterns.

We cannot claim to respect privacy rights and then have our school systems adopt AI-powered surveillance systems, or capitalist surveillance systems that reduce children to data to be sorted, tracked…

If the AI systems we create to power key aspects of society — from education to healthcare, from employment to housing — mask discrimination and systematize harmful bias, we entrench algorithmic injustice.

We swap fallible human gatekeepers for machines that are also flawed but assumed to be objective.

And when machines fail, the people who often have the least resources and most limited access to power structures are those who experience the worst outcomes.

Power

AI will not solve poverty, because the conditions that lead to societies that pursue profit over people are not technical.

AI will not solve discrimination, because the cultural patterns that say one group of people is better than another because of their gender, their skin color, the way they speak, their height, or their wealth are not technical.

AI will not solve climate change, because the political and economic choices that exploit the earth’s resources are not technical.

As Dr. Rumman Chowdhury reminds us in her work on AI accountability, the moral outsourcing of hard decisions to machines does not solve the underlying social dilemmas.

In the early twentieth century, civic organizations used the phrase “justice league” in their fights for women’s suffrage (“The Equal Justice League of Young Women” [1911]), racial equality and civil rights for African Americans (“Race Justice League” [1923]), and workers’ rights (“Justice League” [1914]).

Scores of justice-oriented organizations continue to tap into this tradition today.

Real-world justice leagues serve as inspiration for the belief that against tyranny, oppression, and erasure, we can choose to resist and offer pathways to liberation.

I positioned the emerging work I was doing with the Algorithmic Justice League to follow this banner.

Machines were presumed to be free from the societal biases that plague us mortals. My experiences were showing me otherwise.

Unless we know where the data comes from, who collected it, and how it is organized, we cannot know if ethical processes were used.
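One practical response, in the spirit of the “Datasheets for Datasets” proposal by Timnit Gebru and colleagues, is to attach a structured provenance record to every training dataset. The sketch below is a hypothetical illustration; the field names and values are assumptions, not a standard from the book.

    # Hypothetical provenance record for a training dataset; field names are
    # illustrative assumptions, loosely inspired by "Datasheets for Datasets".
    dataset_datasheet = {
        "name": "face_images_v1",           # hypothetical dataset
        "collected_by": "research lab X",   # who collected it
        "collection_method": "web scrape",  # how it was gathered
        "consent_obtained": False,          # were subjects asked?
        "known_gaps": ["underrepresents darker-skinned women"],
    }

    # A simple audit gate: refuse to proceed if provenance questions are unanswered.
    required_fields = ["collected_by", "collection_method", "consent_obtained"]
    missing = [f for f in required_fields if f not in dataset_datasheet]
    if missing:
        raise ValueError(f"provenance incomplete: missing {missing}")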

There can be billions of parameters in the systems used to build generative AI products that can create images from a line of text such as “an astronaut riding a horse in space.” LLMs can have trillions of parameters.
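To make “parameters” concrete: a parameter is a single learned weight, and totaling them is a one-line operation in most frameworks. Below is a minimal sketch assuming PyTorch; the toy network and its layer sizes are arbitrary choices for illustration, millions of times smaller than the models described here.

    import torch.nn as nn

    # A toy network; real generative models have billions or trillions of such weights.
    model = nn.Sequential(
        nn.Linear(512, 2048),
        nn.ReLU(),
        nn.Linear(2048, 512),
    )

    # Every weight and bias tensor contributes to the parameter count.
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{n_params:,} parameters")  # about 2.1 million for this toy network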

Simply because decisions are made by a computer analyzing data does not make them neutral. Neural does not equate to neutral.

If an automated decision impacts your opportunities and liberties, you must have a voice and a choice in whether and how technology is used.
