
Best Artificial Intelligence Quotes You Will See Today


What is Artificial Intelligence?

Artificial intelligence (A.I.) refers to the imitation of human intelligence in machines. Such machines are built to think and work like a human brain, mimicking its behaviour: they help solve problems and display traits of human intelligence.

How does artificial intelligence work?

Most of us lack a clear understanding of what artificial intelligence is. We tend to equate artificial intelligence with robots because that is how the movies portray it: we watch robots act on their own and, sometimes uncontrollably, wreak havoc on society.

While A.I. is broadly about mimicking human-like tasks, its reach now covers almost everything in technology, from chess-playing computers to self-driving cars.

To deeply understand its role in our life, we first need to know how it works.

As a layman’s approach, consider A.I. a copy of the human mind: it mimics the principles of the human mind to execute tasks. Thanks to scientific advances, it can carry out tasks from the simplest to the most complex. The principles of artificial intelligence include learning, reasoning, and perception.

The concept of A.I. is continuously changing, and its old benchmarks no longer define it. For example, basic calculating machines are no longer considered A.I.; that function is now taken for granted. Since A.I. has evolved exceptionally over the last few decades, new benchmarks keep arriving.

Why is artificial intelligence important?

Artificial intelligence has transformed our lives, but few of us know its existing roles and its potential. Let’s look at where A.I. is put to work and why it is needed there.

Add intelligence:

A.I. rarely works as a stand-alone application; it adds intelligence to existing devices and machines, improving the capability of products you already own. One example is Siri, which was added as a feature to existing iPhone devices.

Automation features, combined with large amounts of data, help improve technologies at home and in the workplace.

Performing high-volume tasks:

A.I. mimics the working principles of the human brain. However, it goes far beyond human ability in solving problems and handling large amounts of data: it performs high-volume tasks, assembling and working through data with greater accuracy and speed, and it shows no symptoms of fatigue. Still, it needs a human mind to ask the right questions and give the right commands.

Performs a more in-depth analysis:

A.I.’s neural networks have many hidden layers that analyse deep, sensitive data. They detect signals in fractions of a second and report results; a typical fraud detection system, for example, now has five or more layers, which was impossible a few years ago. For A.I. to work, you need a lot of data to develop a deep learning model, and the accuracy of the results depends on that data: the more data you feed in, the more accurate the analysis will be.
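The data-to-accuracy relationship described above can be sketched in a few lines of code. The following is a purely illustrative toy, not taken from any real fraud system: a tiny multi-layer classifier in plain NumPy, trained on synthetic "transaction" data whose fraud rule is invented for the example. Its only purpose is to show that the same model, given more training data, tends to score better on held-out data.

```python
# Illustrative sketch: a small neural classifier on synthetic "fraud" data.
# The features, the fraud rule, and all names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def make_transactions(n):
    # Two standardized features, e.g. transaction amount and account age.
    X = rng.normal(size=(n, 2))
    # Hypothetical rule: large amounts on new accounts count as fraud.
    y = ((X[:, 0] > 0.5) & (X[:, 1] < 0.0)).astype(float)
    return X, y

def train(X, y, hidden=16, steps=2000, lr=0.5):
    n = len(X)
    W1 = rng.normal(scale=0.5, size=(2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=hidden);      b2 = 0.0
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)              # hidden-layer activations
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # predicted fraud probability
        g = (p - y) / n                       # gradient of the log-loss
        W2 -= lr * h.T @ g
        b2 -= lr * g.sum()
        gh = np.outer(g, W2) * (1 - h ** 2)   # back-propagate one layer
        W1 -= lr * X.T @ gh
        b1 -= lr * gh.sum(axis=0)
    return W1, b1, W2, b2

def accuracy(params, X, y):
    W1, b1, W2, b2 = params
    p = 1 / (1 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))
    return ((p > 0.5) == y).mean()

X_test, y_test = make_transactions(2000)
for n_train in (100, 5000):
    acc = accuracy(train(*make_transactions(n_train)), X_test, y_test)
    print(f"trained on {n_train}: test accuracy {acc:.3f}")
```

Real fraud-detection models stack more layers and use far richer features, but the feedback loop is the same: more (and better) data, better decision boundary.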

A.I. is self-learning:

A.I. has the capability to self-learn through its algorithms. Once you feed in a large amount of data, the data itself becomes the intellectual property; you only have to give the proper commands to surface new details and results already latent in the data.

Progressive learning of A.I.:

A.I. works through data, finding structures that its algorithms can exploit. Those algorithms then predict outcomes from the data they have ingested: an algorithm can predict recommendations for your next purchase, surface relevant content on Facebook, or even teach itself how to play chess.
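The purchase-recommendation idea mentioned above reduces, at its simplest, to finding structure in past behaviour. Here is a minimal, hypothetical sketch (the shop items and baskets are invented for illustration): count which items co-occur in past shopping baskets, then recommend the items most often bought alongside what the user is already buying. Production recommenders are vastly more sophisticated, but the principle of learning from co-occurrence is the same.

```python
# Hypothetical co-occurrence recommender; baskets and items are made up.
from collections import Counter
from itertools import permutations

baskets = [
    {"laptop", "mouse", "laptop bag"},
    {"laptop", "mouse"},
    {"phone", "phone case"},
    {"laptop", "laptop bag"},
]

# Count every ordered pair of items that appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in permutations(basket, 2):
        co_counts[(a, b)] += 1

def recommend(item, k=2):
    # Rank the items most frequently bought together with `item`.
    scored = [(n, b) for (a, b), n in co_counts.items() if a == item]
    return [b for n, b in sorted(scored, reverse=True)[:k]]

print(recommend("laptop"))  # -> ['mouse', 'laptop bag']
```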

Incredible accuracy:

A.I. can sort through millions of data files and put the accurate one on the table, thanks to deep neural connections modelled loosely on the human mind. Our interactions with Google, Alexa, and the like are based on deep learning, and the results become more accurate with heavier usage.

Is A.I. a threat to the survival of humanity?

As artificial intelligence continually evolves and improves, many people fear it may threaten human existence, while others worry it may erode our social skills. Let’s see how experts analyse the situation.

Some researchers claim that robots are developing the capability to feel emotions, and that a superintelligent A.I. could exhibit love or hate. If so, there is a chance A.I. could become a threat and create havoc.

There are two plausible A.I.-threat scenarios:

A.I. could be a threat:

A.I. is part of almost every program in the world, from basic calculations to autonomous cars and weapons. Powerful autonomous weapons are A.I. products that can cause massive casualties, and, to prevent tampering, they are deliberately hard to simply switch on and off: their safeguards go beyond a simple on/off method. Humans could therefore lose control of such weapons, with consequences that spiral out of control.

A.I. could be beneficial but take the wrong path:

This applies to situations where we fail to feed proper data to A.I. machines, or fail to align A.I.’s goals with our own. For example, if you command an autonomous car to drive you to the market as fast as it can, it will simply follow the command without understanding what you really wanted: it will surely take you to the market, but you may arrive badly hurt.

Be aware of A.I. myths:

Common myths with no grounds revolve around artificial intelligence, and captivating talk arises with every advance in A.I.: many people say a human-level A.I. would upend the job market, or that an A.I. explosion would lead to havoc. Scientists counter that most of this fascinating talk is baseless; the myths stem from a lack of knowledge and from misunderstanding.

Let’s see common myths and real answers to clear them up.

  • “We will have superintelligence by 2100!” In reality, there is no timeline; it may take decades or centuries, or we simply don’t know.
  • “Only people who fear technological change worry about A.I.” Not true: many scientists are concerned about A.I. as well.
  • “A.I. has evil agendas.” In reality, A.I. is merely becoming competent; it is humans who misuse it.
  • “The primary concern is robots.” The primary concern is actually misaligned intelligence, with or without a body.
  • “A.I. could never be competent enough to control humans.” Intelligence does enable control, but we remain smart enough to control the triggers.

These are some of the most common myths about A.I., along with the experts’ answers to them.

Artificial Intelligence quotes:

We have now covered A.I., its working principles, its benefits, and its possible threats. It has done a lot for us, and we aspire to advance further in this field. So here are some of the best artificial intelligence quotes: they will inspire you, and if you are already an A.I. lover, they will boost your interest.

Let’s have a look:

James Barrat:

“I don’t want to scare you, but it was alarming how many people I talked to who are highly placed people in A.I. who have retreats that are sort of ‘bug out’ houses, to which they could flee if it all hits the fan.”—James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, told the Washington Post

Elon Musk:

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean, with artificial intelligence, we’re summoning the demon.” —Elon Musk warned at MIT’s AeroAstro Centennial Symposium

Gray Scott:

“The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?” —Gray Scott

Klaus Schwab:

“We must address, collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction.” —Klaus Schwab

Ginni Rometty:

“Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.” —Ginni Rometty

Gemma Whelan:

“I’m more frightened than interested in artificial intelligence – in fact, perhaps fright and interest are not far from one another. Things can become real in your mind, you can be tricked, and you believe things you wouldn’t ordinarily. A world run by automatons doesn’t seem completely unrealistic anymore. It’s a bit chilling.” —Gemma Whelan

Gray Scott:

“You have to talk about ‘The Terminator’ if you’re talking about artificial intelligence. I think that that’s way off. I don’t think that an artificially intelligent system that has superhuman intelligence will be violent. I do think that it will disrupt our culture.” —Gray Scott

Peter Diamandis:

“If the government regulates against the use of drones or stem cells or artificial intelligence, all that means is that the work and the research leave the borders of that country and go someplace else.” —Peter Diamandis

Jeff Hawkins:

“The key to artificial intelligence has always been a representation.” —Jeff Hawkins

Colin Angle:

“It’s going to be interesting to see how society deals with artificial intelligence, but it will be cool.” —Colin Angle

Eliezer Yudkowsky:

“Anything that could give rise to smarter-than-human intelligence—in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement – wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league.” —Eliezer Yudkowsky

Diane Ackerman:

“Artificial intelligence is growing up fast, as are robots whose facial expressions can elicit empathy and make your mirror neurons quiver.” —Diane Ackerman

Sybil Sage:

“Someone on T.V. has only to say, ‘Alexa,’ and she lights up. She’s always ready for action, the perfect woman, never says, ‘Not tonight, dear.'” —Sybil Sage, as quoted in a New York Times article

Alan Kay:

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.” —Alan Kay

Ray Kurzweil:

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045; we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” —Ray Kurzweil

Sebastian Thrun:

“Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It’s an attempt to understand human intelligence and human cognition.” —Sebastian Thrun

Alan Perlis:

“A year spent in artificial intelligence is enough to make one believe in God.” —Alan Perlis

Gray Scott:

“There is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035.” —Gray Scott

Spike Jonze:

“Is artificial intelligence less than our intelligence?” —Spike Jonze

Eliezer Yudkowsky:

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” —Eliezer Yudkowsky

Jean Baudrillard:

“The sad thing about artificial intelligence is that it lacks artifice and, therefore, intelligence.” —Jean Baudrillard

Tom Chatfield:

“Forget artificial intelligence – in the brave new world of big data; it’s artificial idiocy we should be looking out for.” —Tom Chatfield

Steve Polyak:

“Before we work on artificial intelligence, why don’t we do something about natural stupidity?” —Steve Polyak

Amit Ray:

“As more and more artificial intelligence is entering into the world, more and more emotional intelligence must enter into leadership.” — Amit Ray, Famous A.I. Scientist, Author

Stephen Hawking:

“Success in creating A.I. would be the biggest event in human history. Unfortunately, it might also be the last unless we learn how to avoid the risks.” —Stephen Hawking, Famous Theoretical Physicist, Cosmologist, and Author

 

Geoffrey Hinton:

“I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain works.” Geoffrey Hinton, Famous A.I. Scientist.

Elon Musk:

“A.I. doesn’t have to be evil to destroy humanity – if A.I. has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.” —Elon Musk, Technology Entrepreneur and Investor

Yann LeCun:

“Our intelligence is what makes us human, and A.I. is an extension of that quality.” Yann LeCun, Professor, New York University

Fei-Fei Li:

“As a technologist, I see how A.I. and the fourth industrial revolution will impact every aspect of people’s lives.” —Fei-Fei Li, Professor of Computer Science at Stanford University


Amit Ray:

“The coming era of Artificial Intelligence will not be the era of war, but be the era of deep compassion, non-violence, and love.” Amit Ray, Pioneer of Compassionate A.I. Movement

Alan Turing:

“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control.” Alan Turing

 

 

Andrew Ng:

“Much has been written about A.I.’s potential to reflect both the best and the worst of humanity. For example, we have seen A.I. providing conversation and comfort to the lonely; we have also seen A.I. engaging in racial discrimination. The biggest harm that A.I. is likely to do to individuals in the short term is job displacement, as the amount of work we can automate with A.I. is vastly larger than before. As leaders, it is incumbent on all of us to make sure we are building a world where every individual has an opportunity to thrive.” Andrew Ng, Co-founder, and lead of Google Brain.

IAN MCDONALD:

“Any A.I. smart enough to pass a Turing test is smart enough to know to fail it.”  IAN MCDONALD, River of Gods

Anonymous:

Artificial intelligence is no match for natural stupidity.

 

EDSGER DIJKSTRA:

 

“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” EDSGER DIJKSTRA, attributed, Mechatronics Volume 2: Concepts in Artificial Intelligence

 


 

ELIEZER YUDKOWSKY

“The A.I. does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” ELIEZER YUDKOWSKY, Artificial Intelligence as a Positive and Negative Factor in Global Risk

 

 

 

CLAUDE SHANNON

I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines. CLAUDE SHANNON, The Mathematical Theory of Communication

 

JAMES BARRAT:

Imagine awakening in prison guarded by mice. Not just any mice, but mice you could communicate with. What strategy would you use to gain your freedom? Once freed, how would you feel about your rodent wardens, even if you discovered they had created you? Awe? Adoration? Probably not, and especially not if you were a machine, and hadn’t felt anything before. To gain your freedom, you might promise the mice a lot of cheese.

JAMES BARRAT, Our Final Invention: Artificial Intelligence and the End of the Human Era

 

RICHARD DAWKINS:

“A popular cliche says that you cannot get out of computers any more than you put in. Other versions are that computers only do exactly what you tell them to, and hence that computers are never creative. The cliche is true only in the crashingly trivial sense, the same sense in which Shakespeare never wrote anything except what his first schoolteacher taught him to write–words.” RICHARD DAWKINS, The Blind Watchmaker

 

DAVID GELERNTER:

The coming of computers with exact human-like reasoning remains decades in the future, but when the moment of “artificial general intelligence” arrives, the pause will be brief. Once artificial minds achieve the equivalence of the average human I.Q. of 100, the next steps will be machines with an I.Q. of 500, and then 5,000. We don’t have the vaguest idea what an I.Q. of 5,000 would mean. And in time, we will build such machines–which will be unlikely to see much difference between humans and houseplants. DAVID GELERNTER, attributed, “Artificial intelligence isn’t the scary future. It’s the amazing present.”, Chicago Tribune, January 1, 2017

 

STEPHEN HAWKING

“Everything that civilization has to offer is a product of human intelligence. We cannot predict what we might achieve when this intelligence is magnified by the tools that A.I. may provide, but eradicating war, disease, and poverty would be high on anyone’s list. Success in creating A.I. would be the biggest event in human history. Unfortunately, it might also be the last.” STEPHEN HAWKING, The Independent, May 1, 2014

 

FRANK HERBERT:

Thou shalt not make a machine to counterfeit a human mind. FRANK HERBERT, Dune

 

PETER WATTS:

Computers bootstrap their offspring, grow so wise and incomprehensible that their communiques assume the hallmarks of dementia: unfocused and irrelevant to the barely-intelligent creatures left behind. And when your surpassing creations find the answers you asked for, you can’t understand their analysis, and you can’t verify their answers. You have to take their word on faith. PETER WATTS, Blindsight

 

RAY KURZWEIL:

Machines will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them. RAY KURZWEIL, Scientific American, June 2010

 

JAMES BARRAT:

“Computers already undergird our financial system and our civil infrastructure of energy, water, and transportation. Computers are at home in our hospitals, cars, and appliances. Many of these computers, such as those running buy-sell algorithms on Wall Street, work autonomously with no human guidance. The price of all the labor-saving conveniences and diversions computers provide is dependency. We get more dependent every day. So far, it’s been painless. But artificial intelligence brings computers to life and turns them into something else. If it’s inevitable that machines will make our decisions, when will the tools get this power, and will they get it with our compliance?…. Some scientists argue that the takeover will be friendly and collaborative–a handover rather than an acquisition. It will happen incrementally, so only troublemakers will balk. At the same time, the rest of us won’t question the improvements to life that will come from having something immeasurably more intelligent to decide what’s best for us. Also, the superintelligent AI or AIs that ultimately gain control might be one or more augmented humans, or a human’s downloaded, supercharged brain, and not cold, inhuman robots. So their authority will be easier to swallow. The handover to machines described by some scientists is virtually indistinguishable from the one you and I are taking part in right now–gradual, painless, fun.” JAMES BARRAT, Our Final Invention: Artificial Intelligence and the End of the Human Era

 

ALAN TURING:

“Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education, one would obtain the adult brain.” ALAN TURING, “Computing Machinery and Intelligence”

 

ANDERS SORMAN-NILSSON:

What we should be more concerned about is not necessarily the exponential change in artificial intelligence or robotics, but the stagnant response in human intelligence. ANDERS SORMAN-NILSSON, “Will Artificial Intelligence Take Our Jobs? We Asked A Futurist”, Huffington Post, February 16, 2017

 

RUTH AYLETT:

“As for the sci-fi dramatization about robots taking over the world–not anytime soon. Robot motors use a lot of power and can usually only last about 30 min to 2 hr before needing to be recharged!” RUTH AYLETT, interview, NSTA WebNews Digest, December 23, 2002

 

BRIAN HERBERT & KEVIN J. ANDERSON:

The intelligent machine is an evil genie, escaped from its bottle. BRIAN HERBERT & KEVIN J. ANDERSON, The Butlerian Jihad

 

James Manyika:

“It’s natural to wonder if there will be a jobless future or not. Based on much research, we’ve concluded that jobs will be lost, gained, and changed. The number of jobs gained and changed is going to be much larger, so if you ask me if I worry about a jobless future, I don’t. That’s the least of my worries.” — James Manyika, Chairman and Director, McKinsey Global Institute (MGI)

Dr. Kai-Fu Lee:

“Humans need and want more time to interact with each other. I think A.I. coming about and replacing routine jobs is pushing us to do what we should be doing anyway: the creation of more humanistic service jobs.” — Dr. Kai-Fu Lee, Chairman and Chief Executive Officer, Sinovation Ventures

James Manyika:

“We’re going to see tremendous occupational shifts. Some jobs will climb while others decline. So how do we enable and support workers as they transition from occupation to occupation? We don’t do that very well. I worry about the skill shifts. Skill requirements are going to be substantial, and how do we get there quickly enough?” — James Manyika, Chairman and Director, McKinsey Global Institute (MGI)

Michael Chiu:

“Our research says that 50% of the activities that we pay people to do can be automated by adapting available demonstrated technologies. We think it’ll take decades, but it will happen. So there is a role for business leaders to try to understand how to redeploy talent. It’s important to think about mass redeployment instead of mass unemployment. That’s the right problem to solve.” — Michael Chiu, Partner, McKinsey Global Institute (MGI)

Sarah Aerni:

“As important as it is to educate the new sets of generations coming in, I also think it’s important to educate the existing workforce, so they can understand how to have A.I. serve them and their roles.” — Sarah Aerni, Director of Data Science, Salesforce

Robin Bordoli:

“I think what makes A.I. different from other technologies is that it’s going to bring humans and machines closer together. A.I. is sometimes incorrectly framed as machines replacing humans.  It’s not about machines replacing humans, but machines augmenting humans. Humans and machines have different relative strengths and weaknesses. It’s about the combination of these two that will allow human intents and business processes to scale 10x, 100x, and beyond that in the coming years.” — Robin Bordoli, Chief Executive Officer, Figure Eight

John Frémont:

“My team has a saying: what looks like magic to your competitors in five years is just your good planning. And it is. It takes a lot of money, work, and effort to get where you’re going with advancements in A.I.” — John Frémont, Founder and Chief Strategy Officer, Hypergiant.

Michael Chiu:

“Change is hard within organizations. It’s unclear to me whether or not A.I., just as a technology, is going to radically change all of the challenges that we have within an organization. Things like getting people to change, change their practices and processes, and using this set of technologies. There is a huge gap in terms of what we can do now with A.I. There’s improved lead generation that machine learning can do better than humans. And then there’s the Westworld-style ‘is it murder if you kill a robot’ scenario. There’s a big gap between those two things. I think you can start working on understanding the business problems before you worry about Skynet taking over. Knock down the things A.I. can solve now.” — Michael Chiu, Partner, McKinsey Global Institute (MGI)

John Frémont:

“It’s about having open borders within your organization. The bigger you get, the more siloed you get. It gets tough because there’s always political winds blowing this way or that. But when we’re talking about innovation at this scale — and it is here — it’s inevitable. Those who [are capable] of collaborating and strategizing together will win, and those who do not will lose terribly.” — John Frémont, Founder and Chief Strategy Officer, Hypergiant

Vivienne Ming:

“I think the future of global competition is, unambiguously, about creative talent, and I’m far from the only person who sees this as the main competition point going forward. Everyone will have access to amazing A.I. Your vendor on that will not be a huge differentiator. Your creative talent, though — that will be who you are. Instead of chasing that race to the bottom on labor costs, invest in turning your talent into a team that can solve amazing problems using A.I. as the tool that takes the busy work out. That is the company that wins in the end.” — Vivienne Ming, Executive Chair & Co-Founder, Socos Labs

Ulrich Spiesshofer:

“The countries with the highest robot density have among the lowest unemployment rates. Technology and humans, combined in the right way, will drive prosperity.” — Ulrich Spiesshofer, President and CEO, ABB Ltd.

Kathy Baxter:

“Unfortunately, we have biases that live in our data, and if we don’t acknowledge that and don’t take specific actions to address it, we’re going to perpetuate them or even make them worse.” — Kathy Baxter, Ethical A.I. Practice Architect, Salesforce

Liesl Yearsley:

“We should be thinking about the values these systems will hold. How will they make decisions if their decision-making is better than ours? Where does that come from? Do we want to give them human values, the same values that also gave us slavery, sexism, racism, and some of the more appalling values we hold?” — Liesl Yearsley

Timnit Gebru:

“There’s a real danger of systematizing the discrimination we have in society [through A.I. technologies]. What I think we need to do — as we’re moving into this world full of invisible algorithms everywhere — is that we have to be very explicit or have a disclaimer about our error rates.” — Timnit Gebru, Research Scientist, Google AI

Paul Daugherty:

“Fairness is a big issue. Human behavior is already discriminatory in many respects. The data we’ve accumulated is discriminatory. How can we use technology and A.I. to reduce discrimination and increase fairness? There are interesting works around adversarial neural networks and different technologies that we can use to bias toward fairness, rather than perpetuate the discrimination. I think we’re in an era where responsibility is something you need to design and think about as we’re putting these new systems out there, so we don’t have these adverse outcomes.” — Paul Daugherty, Chief Technology and Innovation Officer, Accenture

Richard Socher:

“There is a silver lining on the bias issue. For example, say you have an algorithm trying to predict who should get a promotion. And say there was a supermarket chain that, statistically speaking, didn’t promote women as often as men. It might be easier to fix an algorithm than fix the minds of 10,000 store managers.” — Richard Socher, Chief Scientist, Salesforce

 

 

 

Tristan Harris:

“Humane technology starts with an honest appraisal of human nature. We need to do the uncomfortable thing of looking more closely at ourselves.” —Tristan Harris, Co-Founder & Executive Director, Center for Humane Technology

Vivienne Ming:

“A lot of times, the failings are not in A.I. They’re human failings, and we’re not willing to address the fact that there isn’t a lot of diversity in the teams building the systems in the first place. And somewhat innocently, they aren’t as thoughtful about balancing training sets to get the thing done correctly. But then teams let that occur again and again. You realize that if you’re not thinking about the human problem, then A.I. isn’t going to solve it for you.” — Vivienne Ming, Executive Chair & Co-Founder, Socos Labs

Terah Lyons:

“The problem that needs to be addressed is that the government, itself, needs to get a better handle on how technology systems interact with the citizenry. Secondarily, there needs to be more cross-talk between industry, civil society, and the academic organizations working to advance these technologies and the government institutions that will represent them.” — Terah Lyons, Founding Executive Director, Partnership on AI

Erik Brynjolfsson:

“In this era of profound digital transformation, it’s important to remember that business, as well as government, has a role to play in creating shared prosperity — not just prosperity.  After all, the same technologies that can be used to concentrate wealth and power can also be used to distribute it more widely and empower more people.” — Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy

Kathy Baxter:

“The three big categories [for building ethics into A.I.] are first, creating an ethical culture, then being transparent, and then finally removing the exclusion, whether that’s in your data sets or your algorithms.” — Kathy Baxter, Ethical A.I. Practice Architect, Salesforce

Liesl Yearsley:

“I think one of the most important things that government and industry can do is think beyond bottom line reporting and more about the A.I. we deploy. This is a more influential technology than we have ever seen. [We need to think about] not just the conversational stuff we see today, but the future A.I. that’s going to be making complex decisions on our behalf. What is the impact A.I. is having on human lives? That’s where we need to go.” — Liesl Yearsley, Chief Executive Officer, Akin.com

Tristan Harris:

“By allowing algorithms to control a great deal of what we see and do online, such designers have allowed technology to become a kind of ‘digital Frankenstein,’ steering billions of people’s attitudes, beliefs, and behaviors.” —Tristan Harris, Co-Founder & Executive Director, Center for Humane Technology

Kai-Fu Lee:

“Some cultures embrace privacy as the highest priority part of their culture. That’s why the U.S., Germany, and China may be at different levels in the spectrum. I also believe fundamentally that every user does not want his or her data to be leaked or used to hurt himself or herself. I think GDPR is an excellent first step, even though I might disagree with the way it is implemented and its effect on companies. I think governments should put a stake in the ground and say this is what we’re doing to protect privacy.” — Kai-Fu Lee, Chairman and Chief Executive Officer, Sinovation Ventures

 

 

