What Do Babies and AI Have in Common?

April 23, 2019
Machines can measure, but can they “eyeball” something?

Artificial Intelligence (AI) has become an enormous part of a whole range of industries. Automated machine learning, in a broad sense, has affected fields from medical diagnosis all the way to online shopping. The volume of data that machines can consume and organize far exceeds what even the smartest, quickest human being could manage. AI is also moving beyond corporate and scientific machine learning solutions: it is increasingly being built into our homes, our cars, and our smart devices, usually to help with jobs that humans do slowly or get wrong frequently.

If you know anything about AI, you’ve probably heard of DeepMind, one of the frontrunners in AI development. It was acquired by Google in 2014 and later folded into Google’s parent company, Alphabet. The company’s work has included a neural network that can play video games the way a human does, as well as work on neural Turing machines and imitating human memory. The ever-present question for companies like DeepMind is how far they can take AI and whether machines can ever think like human beings. While that isn’t a current reality, companies like DeepMind work toward it every day. So, here are some thoughts about the path to true AI.

Looking to Babies

When babies are born, they represent humanity in its rawest form: how we are at root, before we are molded by each other and the world around us. This is the stage of human development that AI researchers look to when trying to build human-like software. Just like human beings, AI has to start small and build up bit by bit.

“By the time a child is 1.5 years old, they’re already completing tasks such as relational reasoning (relating separate items by hidden or surface similarities), which are the sorts of things computers can’t yet do,” notes data analyst Brett Markins from Ukwritings and Australian Help.

To graduate to anything more complex, anything resembling the AI of the science fiction we’ve read or seen, companies have to crack these foundational problems and then use those building blocks to take artificial intelligence into the next realm. It’s ironic, but the great scientific minds in the field are essentially trying to figure out how to get a computer to act like a baby.

DeepMind Development

DeepMind has made strides at this foundational stage, which is why it is among the more likely candidates for uncovering the algorithms that could get machines to think and act like humans. The company published a paper detailing a neural network module built specifically for relational reasoning, the building block noted above.
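
For a sense of how compact that building block can be, here is a minimal, hypothetical sketch in PyTorch of pairwise relational reasoning: one small network scores every ordered pair of objects, the scores are summed, and a second network decodes the result. The layer sizes, feature dimensions, and toy input are illustrative assumptions, not the details of DeepMind’s published module.

```python
# A minimal sketch of a relation-network-style module, loosely following the
# pairwise-reasoning idea described above. Sizes and inputs are illustrative
# assumptions, not the published architecture.
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    def __init__(self, object_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        # g scores every ordered pair of objects.
        self.g = nn.Sequential(
            nn.Linear(2 * object_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # f turns the summed pair representations into an answer.
        self.f = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, objects: torch.Tensor) -> torch.Tensor:
        # objects: (batch, num_objects, object_dim)
        b, n, d = objects.shape
        # Build every ordered pair (o_i, o_j) by broadcasting.
        o_i = objects.unsqueeze(2).expand(b, n, n, d)
        o_j = objects.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([o_i, o_j], dim=-1)        # (b, n, n, 2d)
        relations = self.g(pairs).sum(dim=(1, 2))    # aggregate over all pairs
        return self.f(relations)

# Toy usage: 6 "objects" with 8 features each (e.g., coded size, color, position).
model = RelationModule(object_dim=8, hidden_dim=64, out_dim=4)
scores = model(torch.randn(2, 6, 8))
print(scores.shape)  # torch.Size([2, 4])
```

The point of the design is that the same small pair-scoring network is reused across every combination of objects, so the module reasons about relations without needing a separate rule for each pairing.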

Having made advances in getting machines to spot similarities across diverse inputs, such as size, location, color, shape, and surface, the company is now putting together a neural network for adaptive recognition, meaning the machine can do the same job in a wide range of contexts. If that can be cracked, some of the situational constraints currently weighing AI down are released, and its application in more diverse environments, such as the world at large, comes into view.

Prediction Is Part of Thinking

One of the biggest challenges ahead, for DeepMind and others, is how much of human thinking involves prediction. Our brains handle the complex and vastly diverse range of stimuli they encounter by making predictions based on previous experience. We do it every day to make decisions, and it’s particularly important in activities that involve coordinating with external data, such as playing a sport or driving a car. You can’t drive without using some degree of prediction.

“The very concept of ‘eye-balling’ is deeply human,” says Ophelia Rosenstein, tech blogger at WritingAPaper and Boomessays. “Making a complex set of predictions based on a huge range of data is something of which AI is extremely envious.” DeepMind is busy at work in this regard, having already released papers on its Visual Interaction Network, a neural network that makes predictions in a kinetic environment, such as watching a ball get thrown.
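
As a rough illustration of that predictive idea, the toy sketch below trains a small recurrent network to watch the first few steps of a simulated ball toss and guess where the ball goes next. The ballistic data generator, network sizes, and training loop are assumptions made for illustration; DeepMind’s Visual Interaction Network works from raw video frames and predicts interactions among multiple objects.

```python
# Toy sketch: learn to predict the next state of a tossed ball from its
# recent trajectory. Illustrative only; not the published model.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    def __init__(self, state_dim: int = 2, hidden_dim: int = 32):
        super().__init__()
        self.rnn = nn.GRU(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, time, state_dim) -> predicted next (x, y)
        out, _ = self.rnn(history)
        return self.head(out[:, -1])

def toss_batch(batch: int, steps: int, dt: float = 0.1) -> torch.Tensor:
    """Simulate simple ballistic (x, y) trajectories as training data."""
    v = torch.rand(batch, 2) * 4 + 1           # random launch velocities
    t = torch.arange(steps + 1).float() * dt   # time grid
    x = v[:, :1] * t                           # constant horizontal speed
    y = v[:, 1:] * t - 0.5 * 9.8 * t ** 2      # gravity pulls y down
    return torch.stack([x, y], dim=-1)         # (batch, steps + 1, 2)

model = TrajectoryPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                           # short training loop
    traj = toss_batch(batch=64, steps=10)
    pred = model(traj[:, :-1])                 # see 10 steps, predict the 11th
    loss = nn.functional.mse_loss(pred, traj[:, -1])
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final prediction error: {loss.item():.4f}")
```

Even this toy version captures the essence of the challenge: the network never sees the physics equations, it only learns to anticipate what comes next from experience, which is exactly the kind of “eye-balling” the article describes.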

Conclusion

The answer to the question is that, yes, it will probably be possible to make AI that thinks like a human. If you can break human reasoning down into its smallest pieces and model AI on it at that level, you can then build it up into something far greater.

About the Author

Nora Mork | Tech Journalist

Nora Mork is a tech journalist at Academized and Bigassignments. She shares her knowledge and experience by speaking at tech events and writing tech columns for magazines and blogs, such as Ox Essays.
