


Show Your Work

How well can artificial intelligence mimic human thought processes? A task that is simple for humans, such as identifying the color of a large square in a photo, is daunting for AI: What defines a square? What makes a square large? How do I identify the large square’s color? These conclusions seem automatic to us humans, but there must be a way for AI to draw the same conclusions. That’s where networks come in.

Artificial intelligence is built entirely on networks, neural networks to be specific. Although more complicated than the networks examined in class, neural networks rest on the same core principles used in simpler networks, like a basic social network. Neural networks contain “neurons,” or nodes, and connections, where each connection between nodes represents a directional transfer of information. Through these connections, neural networks are able to distinguish features, such as in the square example. At first this idea of “learning” seems a bit daunting, so what if there were a way to understand the AI’s “thought process,” so to speak, to better understand what’s occurring?
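
As a rough illustration of the “nodes and directional connections” idea (this is a toy sketch, not TbD-net’s actual architecture, and all of the names and numbers are made up for the example), here is a minimal Python snippet where three input neurons feed a single output neuron over weighted connections:

import random

# A tiny feed-forward sketch: three input "neurons" feed one output neuron
# through weighted, directional connections. Illustrative only.

def step(x):
    # Simple activation: the neuron "fires" (outputs 1) if its total input is positive.
    return 1 if x > 0 else 0

def forward(inputs, weights, bias):
    # Each connection carries input * weight toward the output neuron,
    # which sums the incoming signals and applies its activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return step(total)

# Hypothetical feature inputs for the square example:
# [is_square, is_large, is_red] for one region of an image.
inputs = [1, 1, 0]

# Randomly initialized weights stand in for what training would learn.
weights = [random.uniform(-1, 1) for _ in inputs]
bias = random.uniform(-1, 1)

print("output:", forward(inputs, weights, bias))

In a real network there are many layers of these neurons, and training adjusts the connection weights so that the output comes to reflect features like “large square,” which is the kind of internal decision TbD-net tries to make visible.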

That’s where MIT’s new neural network, the Transparency by Design network (TbD-net), comes in; it’s able to show its work, and with great accuracy, too. By looking at how TbD-net thinks, researchers can remove the unintended bias that sometimes results from machine learning. Although still a work in progress, this technology seems extremely promising for advancing the capabilities of artificial intelligence. The prevalence of networks in a more technical and less theoretical sense is intriguing, and I look forward to learning more about these types of networks.

Source: https://thenextweb.com/artificial-intelligence/2018/09/12/mit-taught-a-neural-network-how-to-show-its-work/
