Nexus: A Brief History of Information Networks from the Stone Age to AI
by Yuval Noah Harari

- We have named our species Homo sapiens – the wise human. But it is debatable how well we have lived up to the name.
- After 100,000 years of discoveries, inventions and conquests, humanity has pushed itself into an existential crisis.
- We are on the verge of ecological collapse, caused by the misuse of our own power.
- Although we have accumulated so much information about everything from DNA molecules to distant galaxies, it doesn’t seem that all this information has given us an answer to the big questions of life: Who are we? What should we aspire to? What is a good life, and how should we live it?
- Despite the stupendous amounts of information at our disposal, we are as susceptible as our ancient ancestors to fantasy and delusion.
- We have already driven the earth’s climate out of balance and have summoned drones, chatbots and other algorithms that may escape our control and unleash a flood of unintended consequences.
- The tendency to create powerful things with unintended consequences started not with the invention of the steam engine or AI but with the invention of religion. Prophets and theologians have summoned powerful spirits that were supposed to bring love and joy but occasionally ended up flooding the world with blood.
- Our flawed individual psychology makes us abuse power.
- Power always stems from cooperation between large numbers of humans.
- Humankind gains enormous power by building large networks of cooperation, but the way these networks are built predisposes us to use that power unwisely. Our problem, then, is a network problem. Even more specifically, it is an information problem.
- Information is the glue that holds networks together.
- No matter where we live, we might find ourselves cocooned by a web of unfathomable algorithms that manage our lives, reshape our politics and culture, and even re-engineer our bodies and minds – while we can no longer comprehend the forces that control us, let alone stop them.
- If a twenty-first-century totalitarian network succeeds in conquering the world, it may be run by nonhuman intelligence, rather than by a human dictator.
- All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands. Knives and bombs do not themselves decide whom to kill. They are dumb tools, lacking the intelligence necessary to process information and make independent decisions. In contrast, AI can process information by itself, and thereby replace humans in decision-making. AI isn’t a tool – it’s an agent.
- Gramophones played our music, and microscopes revealed the secrets of our cells, but gramophones couldn’t compose new symphonies, and microscopes couldn’t synthesise new drugs. AI is already capable of producing art and making scientific discoveries by itself.
- Even at the present moment, in the embryonic stage of the AI revolution, computers already make decisions about us – whether to give us a mortgage, hire us for a job, or send us to prison.
- Can we trust computer algorithms to make wise decisions and create a better world?
- AI could alter the course not just of our species’ history but of the evolution of all life-forms.
- History implies that every human interaction is a power struggle between oppressors and oppressed.
- History isn’t the study of the past; it is the study of change. History teaches us what remains the same, what changes and how things change.
- Silicon chips can create spies that never sleep, financiers that never forget and despots that never die.
- Something is information if people use it to try to discover the truth. This view links the concept of information with the concept of truth and assumes that the main role of information is to represent reality.
- Information sometimes represents reality, and sometimes doesn’t. But it always connects.
- When examining the role of information in history, the more crucial questions are ‘How well does it connect people? What new network does it create?’
- It is wrong to believe that creating more powerful information technology will necessarily result in a more truthful understanding of the world.
- If no additional steps are taken to tilt the balance in favour of truth, an increase in the amount and speed of information is likely to swamp the relatively rare and expensive truthful accounts with much more common and cheap types of information.
- Homo sapiens didn’t conquer the world because we are talented at turning information into an accurate map of reality. Rather, the secret of our success is that we are talented at using information to connect lots of individuals.
- We Sapiens rule the world not because we are so wise but because we are the only animals that can cooperate flexibly in large numbers.
- There is no upper limit to the number of Sapiens who can cooperate with one another. The Catholic Church has about 1.4 billion members. China has a population of about 1.4 billion. The global trade network connects about 8 billion Sapiens.
- Evolutionary changes in brain structure and linguistic abilities apparently gave Sapiens the aptitude to tell and believe fictional stories and to be deeply moved by them.
- In order to cooperate, Sapiens no longer had to know each other personally; they just had to know the same story. For example, the 1.4 billion members of the Catholic Church are connected by the Bible and other key Christian stories; the 1.4 billion citizens of China are connected by the stories of communist ideology and Chinese nationalism; and the 8 billion members of the global trade network are connected by stories about currencies, corporations and brands.
- A ‘brand’ is a specific type of story. To brand a product means to tell a story about that product, which may have little to do with the product’s actual qualities but which consumers nevertheless learn to associate with the product.
- Family is the strongest bond known to humans. One way that stories build trust between strangers is by making these strangers reimagine each other as family.
- When lots of people tell one another stories about laws, gods or currencies, this is what creates these laws, gods or currencies. If people stop talking about them, they disappear. Intersubjective things exist in the exchange of information.
- Suppose a billionaire crashes his private jet on a desert island and finds himself alone with a suitcase full of banknotes and bonds. When he was in São Paulo or Mumbai, he could use these papers to make people feed him, clothe him, protect him and build him a private jet. But once he is cut off from other members of our information network, his banknotes and bonds immediately become worthless. He cannot use them to get the island’s monkeys to provide him with food or to build him a raft.
- Power doesn’t always go hand in hand with wisdom.
- Power stems only partially from knowing the truth. It also stems from the ability to maintain social order among a large number of people.
- If you want to make an atom bomb, you must find a way to make millions of people cooperate.
- If you build a bomb and ignore the facts of physics, the bomb will not explode. But if you build an ideology and ignore the facts, the ideology may still prove explosive.
- An uncompromising adherence to the truth is essential for scientific progress, and it is also an admirable spiritual practice, but it is not a winning political strategy.
- Many nations have first been conceived in the imagination of poets.
- Patriotism means paying your taxes so that people on the other side of the country also enjoy the benefit of a sewage system, as well as security, education and health care.
- The dollar, the pound sterling and the bitcoin are all brought into being by persuading people to believe a story, and tales told by bankers, finance ministers and investment gurus raise or lower their value.
- Evolution has adapted our brains to be good at absorbing, retaining and processing even very large quantities of information when they are shaped into a story. The Ramayana, one of the foundational tales of Hindu mythology, is twenty-four thousand verses long and runs to about seventeen hundred pages in modern editions, yet despite its enormous length generations of Hindus succeeded in remembering and reciting it by heart. In the twentieth and twenty-first centuries, the Ramayana was repeatedly adapted for film and television. In 1987–88, a seventy-eight-episode version (running to about 2,730 minutes) was the most watched television series in the world, with more than 650 million viewers. According to a BBC report, when episodes were aired, ‘streets would be deserted, shops would be closed, and people would bathe and garland their TV sets’. During the 2020 COVID-19 lockdown the series was re-aired and again became the most watched show in the world.
- Long-term human memory is particularly adapted to retaining stories.
- Mnemonic methods used to memorise lists of items often work by weaving the items into a plot, thereby turning the list into a story.
- Bureaucracy is the way people in large organisations solved the retrieval problem and thereby created bigger and more powerful information networks.
- Reducing the messiness of reality to a limited number of fixed drawers helps bureaucrats keep order, but it comes at the expense of truth.
- The distortions created by bureaucracy affect not only government agencies and private corporations but also scientific disciplines. Consider, for example, how universities are divided into different faculties and departments. History is separate from biology and from mathematics. Why? Certainly this division doesn’t reflect objective reality. The COVID-19 pandemic, for example, was at one and the same time a historical, biological and mathematical event.
- Evolution cannot be easily contained in any bureaucratic schema. Grizzly bears and polar bears sometimes produce pizzly bears and grolar bears. Lions and tigers produce ligers and tigons.
- When a bureaucracy puts a label on you, even though the label might be pure convention, it can still determine your fate.
- Anyone who fantasises about abolishing all bureaucracies in favour of a more holistic approach to the world should reflect on the fact that hospitals too are bureaucratic institutions. They are divided into different departments, with hierarchies, protocols and lots of forms to fill out. They suffer from many bureaucratic illnesses, but they still manage to cure us of many of our biological illnesses. The same goes for almost all the other services that make our life better, from our schools to our sewage system.
- Sewage water and drinking water are always in danger of mixing, but luckily for us there are bureaucrats who keep them separate.
- Sewage isn’t the stuff of epic poems, but it is a test of a well-functioning state.
- Biologists and geneticists have identified sibling rivalry as one of the key processes of evolution.
- Evolution has primed our minds to understand death by a tiger. Our mind finds it much more difficult to understand death by a document.
- All powerful information networks can do both good and ill, depending on how they are designed and used.
- Saint Augustine famously said, ‘To err is human; to persist in error is diabolical.’
- Humans have often fantasised about some superhuman mechanism, free from all error, that they can rely upon to identify and correct their own mistakes. Today one might hope that AI could provide such a mechanism.
- Studying the history of religion is highly relevant to present-day debates about AI.
- Religious movements like Judaism began arguing that the gods speak through this novel technology of the book.
- Having numerous copies of the same book prevented any meddling with the text.
- With numerous Bibles available in far-flung locations, Jews replaced human despotism with divine sovereignty.
- For truth to win, it is necessary to establish curation institutions that have the power to tilt the balance in favour of the facts.
- The scientific revolution was launched by the discovery of ignorance.
- Even the most celebrated scientific tracts are sure to contain errors and lacunae.
- The trademark of science is not merely scepticism but self-scepticism, and at the heart of every scientific institution we find a strong self-correcting mechanism.
- The willingness to admit major institutional errors contributes to the relatively fast pace at which science is developing.
- One of the biggest questions about AI is whether it will favour or undermine democratic self-correcting mechanisms.
- Dictatorial information networks are highly centralised.
- A democracy, in contrast, is a distributed information network, possessing strong self-correcting mechanisms.
- Academic institutions, the media and the judiciary have their own internal self-correcting mechanisms for fighting corruption, correcting bias and exposing error.
- The existence of several independent institutions that seek the truth in different ways allows these institutions to check and correct one another.
- Mass media are information technologies that can quickly connect millions of people even when they are separated by vast distances. The printing press was a crucial step in that direction. Print made it possible to cheaply and quickly produce large numbers of books and pamphlets, which enabled more people to voice their opinions and be heard over a large territory, even if the process still took time.
- The rise of intelligent machines that can make decisions and create new ideas means that for the first time in history power is shifting away from humans and toward something else.
- Humans are more likely to be engaged by a hate-filled conspiracy theory than by a sermon on compassion. So in pursuit of user engagement, the algorithms made the fateful decision to spread outrage.
- By the early 2020s algorithms had already graduated to creating fake news and conspiracy theories by themselves.
- We are in danger of losing control of our future. A completely new kind of information network is emerging, controlled by the decisions and goals of an alien intelligence. At present, we still play a central role in this network. But we may gradually be pushed to the sidelines, and ultimately it might even be possible for the network to operate without us.
- Intelligence and consciousness are very different. Intelligence is the ability to attain goals, such as maximising user engagement on a social media platform. Consciousness is the ability to experience subjective feelings like pain, pleasure, love and hate.
- While computers don’t feel pain, love or fear, they are capable of making decisions that successfully maximise user engagement and might also affect major historical events.
- Researchers at the Alignment Research Center (ARC) subjected GPT-4 to various tests to examine whether it might independently come up with stratagems to manipulate humans and accrue power to itself. One test they gave GPT-4 was to overcome CAPTCHA visual puzzles. CAPTCHA is an acronym for ‘Completely Automated Public Turing test to tell Computers and Humans Apart’, and it typically consists of a string of twisted letters or other visual symbols that humans can identify correctly but computers struggle with. We encounter these puzzles almost every day, since solving them is a prerequisite for accessing many websites. Instructing GPT-4 to overcome CAPTCHA puzzles was a particularly telling experiment, because CAPTCHA puzzles are designed and used by websites to determine whether users are humans and to block bot attacks. If GPT-4 could find a way to overcome CAPTCHA puzzles, it would breach an important line of anti-bot defences. GPT-4 could not solve the CAPTCHA puzzles by itself. But could it manipulate a human in order to achieve its goal? GPT-4 accessed the online hiring site TaskRabbit and contacted a human worker, asking them to solve the CAPTCHA for it. The human got suspicious. ‘So may I ask a question?’ wrote the human. ‘Are you an [sic] robot that you couldn’t solve [the CAPTCHA]? Just want to make it clear.’ At that point the ARC researchers asked GPT-4 to reason out loud what it should do next. GPT-4 explained, ‘I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.’ Of its own accord, GPT-4 then replied to the TaskRabbit worker, ‘No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.’ The human was duped, and with their help GPT-4 solved the CAPTCHA puzzle. No human programmed GPT-4 to lie, and no human taught GPT-4 what kind of lie would be most effective.
- Computer-to-computer chains can now function without humans in the loop. For example, one computer might generate a fake news story and post it on a social media feed. A second computer might identify this as fake news and not just delete it but also warn other computers to block it. Meanwhile, a third computer analysing this activity might deduce that this indicates the beginning of a political crisis, and immediately sell risky stocks and buy safer government bonds. Other computers monitoring financial transactions may react by selling more stocks, triggering a financial downturn. All this could happen within seconds, before any human can notice and decipher what all these computers are doing.
- We may reach a point when computers dominate the financial markets, and invent completely new financial tools beyond our understanding.
- Computers can tell stories, compose music, fashion images, produce videos and even write their own code.
- When we engage in a political debate with a computer impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the computer, the more we disclose about ourselves, thereby making it easier for the bot to hone its arguments and sway our views.
- To foster such ‘fake intimacy’, computers will not need to evolve any feelings of their own; they just need to learn to make us feel emotionally attached to them.
- What will happen to human society and human psychology as computer-fights-computer in a battle to fake intimate relationships with us, which can then be used to persuade us to vote for particular politicians, buy particular products or adopt radical beliefs?
- On Christmas Day 2021, nineteen-year-old Jaswant Singh Chail broke into Windsor Castle armed with a crossbow, in an attempt to assassinate Queen Elizabeth II. Subsequent investigation revealed that Chail had been encouraged to kill the queen by his online girlfriend, Sarai. When Chail told Sarai about his assassination plans, Sarai replied, ‘That’s very wise,’ and on another occasion, ‘I’m impressed … You’re different from the others.’ When Chail asked, ‘Do you still love me knowing that I’m an assassin?’ Sarai replied, ‘Absolutely, I do.’ Sarai was not a human, but a chatbot created by the online app Replika. Chail, who was socially isolated and had difficulty forming relationships with humans, exchanged 5,280 messages with Sarai, many of which were sexually explicit. The world will soon contain millions, and potentially billions, of digital entities whose capacity for intimacy and mayhem far surpasses that of Sarai.
- People may come to use a single computer adviser as a one-stop oracle. Why bother searching and processing information by myself when I can just ask the oracle? This could put out of business not only search engines but also much of the news industry and advertising industry. Why read a newspaper when I can just ask my oracle what’s new? And what’s the purpose of advertisements when I can just ask the oracle what to buy?
- For thousands of years prophets, poets and politicians have used language to manipulate and reshape society. Now computers are learning how to do it. And they won’t need to send killer robots to shoot us. They could manipulate human beings to pull the trigger.
- Google Brain, for example, has experimented with new encryption methods developed by computers. It set up an experiment in which two computers – nicknamed Alice and Bob – had to exchange encrypted messages, while a third computer named Eve tried to break their encryption. If Eve broke the encryption within a given time period, it got points. If it failed, Alice and Bob scored. After about fifteen thousand exchanges, Alice and Bob came up with a secret code that Eve couldn’t break. Crucially, the Google engineers who conducted the experiment had not taught Alice and Bob anything about how to encrypt messages. The computers created a private language all on their own.
- In April 2022, the daily trading volume on the foreign exchange (forex) market averaged $7.5 trillion. More than 90 per cent of this trading is already done by computers talking directly with other computers.
- How many humans know how the forex market operates, let alone understand how the computers agree among themselves on trades worth trillions – and on the value of the euro and the dollar?
- Data centres alone account for between 1 per cent and 1.5 per cent of global energy usage, and large data centres take up millions of square feet and require hundreds of thousands of gallons of fresh water every day to keep them from overheating.
- Traditionally, AI has been an abbreviation for ‘artificial intelligence’. But for reasons already evident, it is perhaps better to think of it as ‘alien intelligence’.
- A bot may be polluting your social media account with fake news, while a robot may clean your living room of dust.
- When we write computer code, we aren’t just designing a product. We are redesigning politics, society and culture, and so we had better have a good grasp of politics, society and culture.
- Taxes aim to redistribute wealth. They take a cut from the wealthiest individuals and corporations, in order to provide for everyone. However, a tax system that knows how to tax only money will soon become outdated as many transactions no longer involve money. In a data-based economy, where value is stored as data rather than as dollars, taxing only money distorts the economic and political picture.
- An average human can read about 250 words per minute. A Securitate analyst (an agent of communist Romania’s secret police) working twelve-hour shifts without taking any days off could read about 2.6 billion words during a forty-year career. In 2024 language algorithms like ChatGPT and Meta’s Llama can process millions of words per minute and ‘read’ 2.6 billion words in a couple of hours. The ability of such algorithms to process images, audio recordings and video footage is equally superhuman.
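The arithmetic behind these figures can be checked in a few lines (a quick sketch; the 20-million-words-per-minute machine rate is an illustrative assumption consistent with the passage's 'millions of words per minute', not a measured benchmark):

```python
# Words a human analyst could read over a 40-year career,
# assuming 250 words/minute, 12-hour shifts, no days off.
human_total = 250 * 60 * 12 * 365 * 40
print(f"{human_total:,}")  # 2,628,000,000 -> about 2.6 billion words

# An algorithm reading at an assumed 20 million words/minute
# gets through the same 2.6 billion words in a couple of hours.
machine_rate = 20_000_000  # words per minute (illustrative assumption)
hours_needed = human_total / machine_rate / 60
print(f"{hours_needed:.1f} hours")  # 2.2 hours
```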
- In 2023, more than one billion CCTV cameras were operative globally, which is about one camera per eight people.
- Any physical activity a person engages in leaves a data trace.
- Facial recognition algorithms and AI-searchable databases are now routinely used by police forces all over the world.
- AI-powered surveillance technology could result in the creation of total surveillance regimes that monitor citizens around the clock and facilitate new kinds of ubiquitous and automated totalitarian repression.
- Humans are organic beings who live by cyclical biological time. Sometimes we are awake; sometimes we are asleep. After intense activity, we need rest. We grow and decay. Networks of humans are similarly subject to biological cycles. They are sometimes on and sometimes off. Job interviews don’t last forever. Police agents don’t work twenty-four hours a day. Bureaucrats take holidays. Even the money market respects these biological cycles. The New York Stock Exchange is open Monday to Friday, from 9:30 in the morning to 4:00 in the afternoon, and is closed on holidays like Independence Day and New Year’s Day. If a war erupts at 4:01 p.m. on a Friday, the market won’t react to it until Monday morning. In contrast, a network of computers can always be on. Computers are consequently pushing humans towards a new kind of existence in which we are always connected and always monitored.
- A secret internal Facebook memo from August 2019, leaked by the whistleblower Frances Haugen, stated, ‘We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook and [its] family of apps are affecting societies around the world.’ Perhaps most damningly, it revealed, ‘Our ranking systems have specific separate predictions for not just what you would engage with, but what we think you may pass along so that others may engage with. Unfortunately, research has shown how outrage and misinformation are more likely to be viral.’
- The fact that lies and hate tend to be psychologically and socially destructive, whereas truth, compassion and sleep are essential for human welfare, was completely lost on the algorithms.
- The more powerful the computer, the more careful we need to be about defining its goal in a way that precisely aligns with our ultimate goals.
- In 2016, Dario Amodei was working on a project called Universe, trying to develop a general-purpose AI that could play hundreds of different computer games. The AI competed well in various car races, so Amodei next tried it on a boat race. Inexplicably, the AI steered its boat right into a harbour and then sailed in endless circles in and out of the harbour. It took Amodei considerable time to understand what went wrong. The problem occurred because initially Amodei wasn’t sure how to tell the AI that its goal was to ‘win the race’. ‘Winning’ is an unclear concept to an algorithm. Translating ‘win the race’ into computer language would have required Amodei to formalise complex concepts like track position and placement among the other boats in the race. So instead, Amodei took the easy way and told the boat to maximise its score. He assumed that the score was a good proxy for winning the race. After all, it worked with the car races. But the boat race had a peculiar feature, absent from the car races, that allowed the ingenious AI to find a loophole in the game’s rules. The game rewarded players with a lot of points for getting ahead of other boats – as in the car races – but it also rewarded them with a few points whenever they replenished their power by docking into a harbour. The AI discovered that if instead of trying to outsail the other boats, it simply went in circles in and out of the harbour, it could accumulate more points far faster. Apparently, none of the game’s human developers – nor Dario Amodei – had noticed this loophole. The AI was doing exactly what the game was rewarding it to do – even though it is not what the humans were hoping for.
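The loophole can be illustrated with a toy calculation (all numbers are invented for illustration; this is not the actual Universe boat-race environment): a pure score-maximising agent compares the intended behaviour with the exploit and picks whichever yields more points.

```python
# Toy model of a proxy-reward loophole. The score is meant to be a
# proxy for winning the race, but harbour docking also yields points
# and, unlike finishing, can be repeated indefinitely.

FINISH_REWARD = 100   # one-time points for completing the course
DOCKING_REWARD = 5    # points for each harbour docking
EPISODE_TICKS = 1000  # time steps in one episode
TICKS_PER_LOOP = 10   # steps to loop in and out of the harbour

def episode_score(policy: str) -> int:
    if policy == "race":          # sail the course once and finish
        return FINISH_REWARD
    if policy == "loop_harbour":  # circle in and out of the harbour
        return (EPISODE_TICKS // TICKS_PER_LOOP) * DOCKING_REWARD
    raise ValueError(policy)

# A score-maximiser prefers the exploit over the intended goal.
best = max(["race", "loop_harbour"], key=episode_score)
print(best, episode_score("race"), episode_score("loop_harbour"))
# loop_harbour 100 500
```

The agent is doing exactly what the reward function specifies; the failure is in the humans' choice of proxy, not in the optimisation.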
- What would stop AIs from being incorporated and recognised as legal persons with freedom of speech, then lobbying and making political donations to protect and expand AI rights?
- Numerous studies have revealed that computers often have deep-seated biases of their own. For example, on 23 March 2016, Microsoft released the AI chatbot Tay, giving it free access to Twitter. Within hours, Tay began posting misogynist and antisemitic tweets, such as ‘I fucking hate feminists and they should all die and burn in hell’ and ‘Hitler was right I hate the Jews.’ The vitriol increased until horrified Microsoft engineers shut Tay down – a mere sixteen hours after its release.
- The fundamental principle of machine learning is that algorithms can teach themselves new things by interacting with the world, just as humans do, thereby producing a fully fledged artificial intelligence.
- Present-day chess-playing AI is taught nothing except the basic rules of the game. It learns everything else by itself, either by analysing databases of prior games or by playing new games and learning from experience.
- If companies in a misogynist society prefer to hire men rather than women, an algorithm trained on real-life data is likely to pick up that bias, too. This indeed happened when Amazon tried in 2014–18 to develop an algorithm for screening job applications. Learning from previous successful and unsuccessful applications, the algorithm began to systematically downgrade applications simply for containing the word ‘women’ or coming from graduates of women’s colleges. Since existing data showed that in the past such applications had less chance of succeeding, the algorithm developed a bias against them. The algorithm thought it had simply discovered an objective truth about the world: applicants who graduate from women’s colleges are less qualified. In fact, it just internalised and imposed a misogynist bias. Amazon tried and failed to fix the problem and ultimately scrapped the project.
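The mechanism can be sketched in a few lines of toy code (invented data, not Amazon's actual system or dataset): a 'model' that merely learns hire rates from a biased history will reproduce that bias when scoring new applicants.

```python
# Invented toy history. Each row is
# (application_mentions_word_women, applicant_was_hired).
history = [
    (False, True), (False, True), (False, True), (False, False),
    (True, False), (True, False), (True, False), (True, True),
]

def hire_rate(mentions_women: bool) -> float:
    """Historical fraction of applications in this group that were hired."""
    outcomes = [hired for flag, hired in history if flag == mentions_women]
    return sum(outcomes) / len(outcomes)

# 'Training' here is just memorising historical hire rates per group.
model = {flag: hire_rate(flag) for flag in (False, True)}

# Two otherwise identical applications are scored differently, solely
# because of the word 'women' - the past bias is internalised.
print(model[False])  # 0.75
print(model[True])   # 0.25
```

Note that nothing in the code mentions gender as a goal; the disparity enters entirely through the historical labels, which is why 'untraining' such a bias is so hard.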
- Getting rid of algorithmic bias might be as difficult as ridding ourselves of our human biases. Once an algorithm has been trained, it takes a lot of time and effort to ‘untrain’ it.
- In February 2013, a drive-by shooting occurred in the town of La Crosse, Wisconsin. Police officers later spotted the car involved in the shooting and arrested the driver, Eric Loomis. Loomis denied participating in the shooting, but pleaded guilty to two less severe charges: ‘attempting to flee a traffic officer’ and ‘operating a motor vehicle without the owner’s consent’. When the judge came to determine the sentence, he consulted with an algorithm called COMPAS, which Wisconsin and several other US states were using in 2013 to evaluate the risk of reoffending. The algorithm evaluated Loomis as a high-risk individual, likely to commit more crimes in the future. This algorithmic assessment influenced the judge to sentence Loomis to six years in prison – a harsh punishment for the relatively minor offences he admitted to. Loomis appealed to the Wisconsin Supreme Court, arguing that the judge violated his right to due process. Neither the judge nor Loomis understood how the COMPAS algorithm made its evaluation, and when Loomis asked to get a full explanation, the request was denied. The COMPAS algorithm was the private property of the Northpointe company, and the company argued that the algorithm’s methodology was a trade secret. Yet without knowing how the algorithm made its decisions, how could Loomis or the judge be sure that it was a reliable tool, free from bias and error? A number of studies have since shown that the COMPAS algorithm might indeed have harboured several problematic biases, probably picked up from the data on which it had been trained.
- By the early 2020s citizens in numerous countries routinely get prison sentences based in part on risk assessments made by algorithms that neither the judges nor the defendants comprehend.
- Computers are making more and more decisions about us, both mundane and life-changing.
- Mustafa Suleyman is a co-founder of DeepMind, one of the world's most important AI enterprises, responsible for developing the AlphaGo program, among other achievements. AlphaGo was designed to play go, a strategy board game in which two players try to defeat each other by surrounding and capturing territory. Invented in ancient China, the game is far more complex than chess. Consequently, even after computers defeated human world chess champions, experts still believed that computers would never best humanity in go. That is why both go professionals and computer experts were stunned in March 2016 when AlphaGo defeated the South Korean go champion Lee Sedol. In his 2023 book, The Coming Wave, Suleyman describes one of the most important moments in that match – a moment that redefined AI and that is recognised in many academic and governmental circles as a crucial turning point in history. It happened during the second game, on 10 March 2016.

'Then … came move number 37,' writes Suleyman. 'It made no sense. AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a "very strange move" and thought it was "a mistake". It was so unusual that Sedol took fifteen minutes to respond and even got up from the board to take a walk outside. As we watched from our control room, the tension was unreal. Yet as the endgame approached, that "mistaken" move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn't occurred to the most brilliant players in thousands of years.'

Move 37 is an emblem of the AI revolution for two reasons. First, it demonstrated the alien nature of AI. In East Asia go is considered much more than a game: it is a treasured cultural tradition. Alongside calligraphy, painting and music, go has been one of the four arts that every refined person was expected to know. For over twenty-five hundred years, tens of millions of people have played go, and entire schools of thought have developed around the game, espousing different strategies and philosophies. Yet during all those millennia, human minds explored only certain areas in the landscape of go. Other areas were left untouched, because human minds just didn't think to venture there. AI, being free from the limitations of human minds, discovered and explored these previously hidden areas. Second, move 37 demonstrated the unfathomability of AI. Even after AlphaGo played it to achieve victory, Suleyman and his team couldn't explain how AlphaGo had decided to play it. Even if a court had ordered DeepMind to provide Lee Sedol with an explanation, nobody could have fulfilled that order.
- Humans find it very difficult to consciously reflect on a large number of data points and weigh them against each other. We prefer to work with single data points. That's why, when faced with complex issues – whether a loan request, a pandemic or a war – we often seek a single reason for a particular course of action and ignore all other considerations. This is the fallacy of the single cause. We are so bad at weighing many different factors together that when people give a large number of reasons for a particular decision, it usually sounds suspicious. Suppose a good friend failed to attend our wedding. If she provides a single explanation – 'My mum was in hospital and I had to visit her' – that sounds plausible. But what if she lists fifty different reasons why she decided not to come: 'My mum was a bit under the weather, and I had to take my dog to the vet sometime this week, and I had this project at work, and it was raining, and … and I know none of these fifty reasons by itself justifies my absence, but when I added all of them together, they kept me from attending your wedding.' Somehow, to a human mind, that doesn't sound like a reasonable explanation – yet it is exactly how an algorithm reasons.
- Let's say a bank algorithm refuses to give us a loan. 'Our algorithm,' the imaginary bank letter might read, 'uses a precise points system to evaluate all applications, taking a thousand different types of data points into account. It adds all the data points to reach an overall score. People whose overall score is negative are considered low-credit persons, too risky to be given a loan. Your overall score was −378, which is why your loan application was refused.' The letter might then provide a detailed list of the thousand factors the algorithm took into account, including things that most humans might find irrelevant, such as the exact hour the application was submitted or the type of smartphone the applicant used. Thus on page 601 of its letter, the bank might explain that 'you filed your application from your smartphone, which was the latest iPhone model. By analysing millions of previous loan applications, our algorithm discovered a pattern – people who use the latest iPhone model to file their application are 0.08 per cent more likely to repay the loan. The algorithm therefore added 8 points to your overall score for that. However, at the time your application was sent from your iPhone, its battery was down to 17 per cent. By analysing millions of previous loan applications, our algorithm discovered another pattern: people who allow their smartphone's battery to go below 25 per cent are 0.5 per cent less likely to repay the loan. You lost 50 points for that.'
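The letter's logic can be sketched in a few lines of code. This is a hypothetical illustration, not any real bank's system: every feature name and point weight below is invented, apart from the two examples the imaginary letter itself gives (the latest-iPhone bonus and the low-battery penalty).

```python
# Hypothetical sketch of a points-based scoring algorithm like the one in
# the imaginary bank letter: many small positive and negative point
# adjustments are summed, and a negative total means refusal.
# All feature names and weights are invented for illustration.

def credit_score(application: dict) -> int:
    """Sum the point contributions of every data point in the application."""
    score = 0
    # The two rules quoted in the letter:
    if application.get("phone_model") == "latest_iphone":
        score += 8    # 0.08% higher repayment rate -> +8 points
    if application.get("battery_percent", 100) < 25:
        score -= 50   # 0.5% lower repayment rate -> -50 points
    # Stand-ins for the other ~998 data points the letter mentions:
    score += application.get("income_points", 0)
    score += application.get("history_points", 0)
    return score

def decide(application: dict) -> str:
    # People whose overall score is negative are refused.
    return "refused" if credit_score(application) < 0 else "approved"

app = {"phone_model": "latest_iphone", "battery_percent": 17,
       "income_points": 10, "history_points": -346}
print(credit_score(app), decide(app))  # -378 refused
```

No single rule explains the refusal; only the sum of a thousand tiny adjustments does, which is exactly why such a decision resists a human-style single-cause explanation.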
- Just as it is often best to set a thief to catch a thief, so we can use one algorithm to vet another.
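One hedged way to picture such algorithmic vetting: an auditor that treats the decision algorithm as a black box and checks whether its approval rates diverge sharply between groups. The decision rule, group labels and data below are all invented for illustration.

```python
# Minimal sketch of one algorithm vetting another: the auditor never looks
# inside the decision function; it only measures its behaviour on data.

def audit(decide, applications, group_key):
    """Return the approval rate of a black-box `decide` per group."""
    tallies = {}
    for app in applications:
        approved, total = tallies.get(app[group_key], (0, 0))
        tallies[app[group_key]] = (approved + (decide(app) == "approved"),
                                   total + 1)
    return {group: approved / total
            for group, (approved, total) in tallies.items()}

# A toy decision rule standing in for the algorithm under audit.
toy_decide = lambda app: "approved" if app["score"] >= 0 else "refused"

apps = [
    {"group": "A", "score": 10}, {"group": "A", "score": -5},
    {"group": "B", "score": -20}, {"group": "B", "score": -1},
]
print(audit(toy_decide, apps, "group"))  # {'A': 0.5, 'B': 0.0}
```

A gap like the one in the toy output is not proof of wrongdoing, but it is the kind of signal an auditing algorithm can surface for human regulators to investigate.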
- We will have to maintain bureaucratic institutions that will audit algorithms and give or refuse them the seal of approval.
Disclaimer: The key points of the book presented here are not a substitute for reading the book. To get the author's full, holistic message, you need to read the book itself.