Despite the image they may conjure up, neural networks are not networks of computers that are coming together to simulate the human brain and slowly take over the world. There's no T-1000, at least not yet.
At their core, neural networks are among today's most sophisticated forms of artificial intelligence. Through an iterative training process known as deep learning, neural networks learn to find hidden patterns and underlying nonlinear mathematical relationships in massive data sets (like financial market data).
Neural networks are an obvious fit for the world of Wall Street, where there's no such thing as too much data or analysis that is too robust. Check the headlines and you'll quickly realize that AI's neural networks are on the brink of taking over Wall Street and transforming technical analysis as it's been practiced for decades. Neural networks are also at work in the automotive and medical diagnostics industries; they're applicable any time a vast amount of data needs to be analyzed.
But how did this predictive science come into being? We'll do our best to break it down.
By most accounts, artificial intelligence stems from theoretical computer science models envisioned in the years following World War II. These models drew inspiration from research on the organization and interaction of neurons within the human brain.
Neural networks have their roots in early twentieth-century computational neuroscience, beginning with Louis Lapicque's integrate-and-fire model of how a neuron responds to input. Later research in the late 1950s and early '60s by cognitive psychologists and early computer scientists at Dartmouth, MIT, Carnegie Mellon and other academic institutions introduced the concept of 'thinking machines' that could solve discrete problems as well as or better than humans.
This early cross-pollination between neuroscience and computer science is what set the current field of artificial intelligence and thinking machines (today's computers and the cloud) in motion.
How Neural Networks Mimic Human Thought
Because thinking (as humans do it, at least) is a highly imprecise and scattershot method of drawing conclusions from concrete observations, experts from neuroscientists to epistemologists have had only surface-level success at delineating a consistent model of how the human brain works.
Part of that success is the neuron model, which spurred the development of artificial neural networks. In essence, the neuron model connects nodes to other nodes, and those connections combine to generate conclusions that can be applied to similar circumstances.
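To make the node idea concrete, here is a minimal sketch in Python of a single artificial "node": it takes a few input signals, weighs each connection, and squashes the sum into an output it can pass along. The inputs, weights and bias below are made-up values for illustration only, not drawn from any real network.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial node: a weighted sum of its inputs, squashed
    through a sigmoid activation into a signal between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Purely illustrative inputs and hand-picked connection weights.
signal = neuron(inputs=[0.5, 0.1, 0.9], weights=[0.4, -0.2, 0.7], bias=0.1)
print(round(signal, 3))  # the "conclusion" this node passes to the next node
```

A real network chains thousands of these nodes into layers, and it is the pattern of connection weights, not any single node, that encodes what the network has learned.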
Human thought, of course, contains far more subjectivity, and its 'data points' are less well-defined. Plus, we still don't really know how and why certain areas of the human brain are accessed. These uncertainties are what have prevented artificial neural networks from reaching their ultimate state: behaving like a human brain.
But this is not to say that neural networks aren't capable of reaching novel conclusions. It's just that the "thought process" underlying those conclusions is, in some respects, hidden within the architecture of a neural network that has been designed for a particular purpose, such as financial market forecasting.
Teaching Machines To Be Thinking Machines
The key appeal of a neural network is that it can be trained, or "taught," to perform certain functions. From that training, an artificial neural network accepts the data it is fed (such as annual coffee yields in Brazil) and draws connections between that information and its store of trained data nodes, which can include any of the data it saw during training.
After drawing as many relevant connections from past data as possible, the network outputs a new data point based on the totality of those connections.
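As a rough sketch of how that training works, consider the toy Python example below: a single node with one adjustable connection, repeatedly exposed to made-up data (not real coffee yields or market figures) until its predictions converge on the underlying relationship. Real networks stack many such nodes with nonlinear activations, but the correction step is the same basic idea.

```python
import random

random.seed(0)

# Made-up training data: an input signal x and the target y the
# network should learn to predict (here, the hidden rule y = 0.6x + 0.2).
data = [(x / 10, 0.6 * (x / 10) + 0.2) for x in range(10)]

w, b = random.random(), random.random()  # start with random connections
lr = 0.1  # learning rate: how far each correction nudges the weights

# "Training": repeated exposure to the same data, with each pass
# adjusting the connection strengths so predictions drift toward targets.
for epoch in range(2000):
    for x, y in data:
        error = (w * x + b) - y
        w -= lr * error * x  # strengthen or weaken the connection
        b -= lr * error      # shift the node's baseline

print(f"learned w={w:.2f}, b={b:.2f}")  # converges near w=0.6, b=0.2
```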
This is a simplified explanation, but it illustrates the deliberate process underlying what can otherwise seem like an opaque, catch-all category of AI that neural networks sometimes fall into.
Of course, the sheer volume of information fed into a neural network, and the near-infinite connections that can be made between large numbers of data nodes, mean that relying on a neural network requires a certain degree of faith: faith that the data it was trained on is specific enough to produce a relevant output, yet broad enough that the network has learned general relationships rather than merely memorizing patterns in the training data.
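One common way to ground that faith, sketched below with entirely hypothetical numbers, is to hold some data back during training and then compare the network's error on that unseen hold-out set against its error on the training set.

```python
# Illustrative sketch: score a trained model on data it never saw.
def mean_squared_error(model, samples):
    return sum((model(x) - y) ** 2 for x, y in samples) / len(samples)

# Hypothetical trained model and made-up data points.
def model(x):
    return 0.6 * x + 0.2

train_set = [(0.1, 0.27), (0.5, 0.49), (0.9, 0.75)]
holdout_set = [(0.3, 0.40), (0.7, 0.61)]

print(f"train error:   {mean_squared_error(model, train_set):.4f}")
print(f"holdout error: {mean_squared_error(model, holdout_set):.4f}")
# Comparable errors suggest the model generalized; a holdout error far
# above the training error suggests it merely memorized its data.
```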
A Practical Example Of A Neural Network
Nearly 30 years ago, at a time when practically no one else in the financial industry saw the practical application of AI to technical analysis and financial market forecasting, Louis B. Mendelsohn released VantagePoint, which used neural networks to predict global markets.
Today, those neural networks learn by sifting through extensive amounts of global market data over and over again to find patterns that can't be seen by the human eye or detected by traditional, rule-based, linear modeling tools such as statistical correlation analysis. This repeated exposure during training allows the networks to generalize and make forecasts involving related, but previously unseen, patterns in similar data in the future.
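To see why linear tools can miss such patterns, consider the small illustration below. This is not VantagePoint's method, just a textbook example with synthetic numbers: a perfectly deterministic nonlinear relationship can register a linear correlation of essentially zero.

```python
import statistics  # statistics.correlation requires Python 3.10+

# Synthetic data: y depends entirely on x (y = x squared), yet on a
# symmetric range the linear correlation between them is roughly zero.
xs = [x / 10 for x in range(-10, 11)]
ys = [x * x for x in xs]

print(f"linear correlation: {statistics.correlation(xs, ys):.3f}")  # ~0.000
```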
Its documented performance at forecasting trends and changes in trend direction across hundreds of global markets, with up to 86 percent forecasting accuracy, is a testament to the power of AI and its increasingly important role in financial market analysis and trading.
VantagePoint is a content partner of Benzinga.