By Kumesh Aroomoogan, co-founder and CEO, Accern
We’ve been told for years that the prevalence of data will require an ever-increasing number of data scientists. As our world has become more digital, more data has become available to guide our decision making – creating opportunities for insights that were never before possible, while at the same time requiring new expertise to make sense of it all. Data is, after all, only useful if it provides some actionable value. Without someone to interpret it, data alone is not very helpful.
But does everyone really need to know how to code to access the benefits of data? A short time ago, the answer was an obvious and resounding yes. As McKinsey famously predicted in its 2011 big data report, the United States would face a shortage of as many as 190,000 people with deep analytical skills. While it’s true that there is a deficit of analytical expertise, the shortage is likely not as severe as once expected.
Why might they not have seen this coming? The introduction and widespread adoption of no-code and low-code technology, which lets users access the benefits of data without deep data engineering expertise. Essentially, data scientists have become so proficient that they can train models to be adapted and customized for nearly any purpose. As a result, the end users of data no longer need to build their own models. If everyone needed more than a cursory understanding of data, this wouldn’t be the case.
And this comes at an important time for the financial services industry. Alternative data has grown substantially in recent years, as technology now exists to analyze company performance in real time – looking at metrics such as credit card spending, satellite imagery, Slack and Microsoft Teams usage volume, weather, and much more. New datasets have emerged that give asset managers, investors, and insurers a view into market performance that was not possible even 12 months ago. At Accern, for example, we analyze more than 500 signals (consisting of news articles, blogs, SEC filings, and more) from around the world, every single second.
As data usage and expectations have become more sophisticated, demand for data that can provide that crucial edge has also intensified. Faster insights tailored to specific needs have become the norm, as many of us cannot imagine a world without the real-time insights we have grown accustomed to. For example, we no longer need to rely on quarterly financial statements to know how a company is doing. Real-time data exists that provides a window into foot traffic patterns, mobile app downloads, keyword usage, and a lot more. This allows us to gauge the likely health of a corporation at any moment so that we can make better predictions about its ability to meet financial targets.
Thankfully, as data usage has grown, the field has expanded so that data models can be created and pre-built for specific use cases. The best and brightest minds are now creating the tools that make the benefits of data and AI a reality for the masses. Data scientists are working to make data more accessible, which only fuels demand further.
Will we need an ever-increasing number of coders and developers to create the tools that can help us benefit from the growing data opportunity? Likely not. Demand is certainly increasing, and we do face a shortage of talent, but my expectation is that the field of data science will adapt more quickly than many of us anticipated. And, as a result of no-code and low-code technology, data scientists are enabling teams to solve their own industries’ biggest challenges.
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.