Before I became a software engineer, our university president, Prof. Tadeusiewicz, spoke to the graduating students at my college. After many years, I still remember his main idea: “using a rational approach, we understand particular fields of science well already, yet many unexplored and potentially valuable discoveries lie at the junctions between different fields”. It was a concise thought, and one worthwhile for a young student to investigate more deeply. Computer science alone offers immense opportunities to create and explore, and combining it with other fields makes it even more fascinating. From a business perspective, software engineering digitalises existing businesses and creates new niches. Transforming existing businesses takes a common-sense approach, while breaking new territory requires a solid mental model to define what is worthwhile and what is not.
As a software engineer for more than a decade, I have learned the intricacies of building a well-designed product from scratch. At the same time, my main focus was always the “what” – what the product I was working on should do. Being a founder now requires a different, more holistic approach. The “what” is still essential, yet I have to focus on “why” and “how” first – why a product is needed, what its purpose is, how to validate it and make it worthwhile.
“Why would a business need our product?”
“Why would investment professionals use it?”
Possible answers would be: “to automate existing processes”, “to make current operations more efficient”, or “to extract valuable insights from an overwhelming amount of information”.
After distilling the “why”, let’s proceed with the “how” – the second most important question. Programming is about writing rules that tell a computer what to accomplish. Can a machine be intelligent enough to develop such rules by itself? The field that focuses on this problem is Artificial Intelligence (AI). Studies on AI started decades ago but only really gained traction in recent years, as computing power multiplied while costs came down with economies of scale. Machine Learning (ML), combined with big data, became the new wave about five to six years ago. We can now mine text, images, audio, and video in real time, which has transformed language analysis and image recognition to levels unachievable before. Natural language processing (NLP) is only a small part of ML; there is also predictive analytics, anomaly detection, segmentation, and the list keeps growing. I have noticed a number of “how” questions floating around the finance industry:
“How can machine learning help fund managers conduct trading more efficiently?”
“How can machine learning be used to make trading or valuation predictions?”
“How can machine learning be trained with reliable datasets?”
After defining “why” and “how”, we can focus on the “what”. Aprimerose has developed a set of tools to monitor social networks. It uses deep learning to process, analyse, and generate insights from speech. The vast amount of information from social networks allows us to interpret and synthesise attributes such as impressions, likes, comments, and sentiments, all within our platform.
How is this tool valuable, and why would businesses use it? Take, for instance, a four-hour YouTube video of a critical Senate hearing. We analyse the video, extract the audio, process the transcription, and end up with a summary. This summary allows us to measure different sentiments (e.g. bearish or bullish, hawkish or dovish) and compare them with historical data from previous hearings. We store transcribed versions of the video for future longitudinal analysis. In short, speech recognition tools save the time and resources needed to process a long but vital video and distil important meta-information, such as sentiment.
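The sentiment-scoring step at the end of that pipeline can be sketched in a few lines. The keyword lexicon and scoring rule below are purely illustrative assumptions; a production system would use a trained language model over the full transcript rather than word counts.

```python
# Illustrative sketch: scoring a transcript excerpt for hawkish vs. dovish
# tone with a simple keyword lexicon. The word lists are hypothetical.

HAWKISH = {"inflation", "tighten", "tightening", "hike", "overheating"}
DOVISH = {"stimulus", "accommodative", "easing", "patient", "support"}

def sentiment_score(transcript: str) -> float:
    """Return a score in [-1, 1]: positive = hawkish, negative = dovish."""
    words = [w.strip(".,") for w in transcript.lower().split()]
    hawk = sum(w in HAWKISH for w in words)
    dove = sum(w in DOVISH for w in words)
    total = hawk + dove
    return 0.0 if total == 0 else (hawk - dove) / total

print(sentiment_score("We must tighten policy as inflation is overheating."))  # → 1.0
print(sentiment_score("An accommodative, patient stance will support growth."))  # → -1.0
```

Scores like these, computed per hearing, are what gets compared against the historical record.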
A more complex use case is to analyse human emotions in video streams from critical meetings as part of risk analysis. Models can recognise micro-expressions and compare a speaker’s current mood with historical recordings of the same person. Human memory is imperfect, and ML has a significant advantage over a trained psychologist or profiler because it can access the recorded video and audio history on demand. Initially, a learning phase is required in which human emotions are labelled to help the machine recognise them. Subsequently, the growing training set feeds back into the model, which achieves higher accuracy with each iteration.
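The label-and-retrain loop just described can be illustrated with a toy nearest-centroid classifier. The 2-D “expression features”, the labels, and the sample points are all fabricated for illustration; a real system would extract features from video frames with a deep network.

```python
# Minimal sketch of the labelling-and-retraining loop: each iteration adds
# newly labelled examples and refits the per-label centroids.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def predict(sample, centroids):
    # Assign the label whose centroid is closest (squared Euclidean distance).
    return min(centroids, key=lambda lbl: (sample[0] - centroids[lbl][0]) ** 2
                                        + (sample[1] - centroids[lbl][1]) ** 2)

# Iteration 1: a few hand-labelled examples per emotion
labelled = {"calm": [(0.1, 0.2)], "stressed": [(0.9, 0.8)]}
centroids = {lbl: centroid(pts) for lbl, pts in labelled.items()}
print(predict((0.8, 0.9), centroids))  # → stressed

# Iteration 2: newly labelled data is added and the model is refit
labelled["calm"].append((0.2, 0.1))
labelled["stressed"].append((0.7, 0.9))
centroids = {lbl: centroid(pts) for lbl, pts in labelled.items()}
```

Each pass through the loop enlarges the labelled set and moves the centroids, which is the mechanism behind the accuracy gains per iteration.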
Finding correlations between publicly available data and our data is another use case. Understanding the ‘crowd’ sentiment quickly and efficiently allows firms to allocate investments strategically. Correlations could be short-term, e.g. a spike in usage of a particular asset, or long-term, e.g. rotating allocation between stocks, currencies, bonds, and private equity. The public nature of the data would give equal opportunity to such correlation finders, so they would compete, and by definition no single solution would work for long before becoming outdated. We need to train and retrain such tools constantly to reflect current changes. The self-feeding machinery of ingesting publicly available data could become a well-developed ecosystem in which we derive insights more efficiently. An actual use case would be to monitor hawkish or dovish sentiment during Fed meetings, find correlations with inflation spreads, and devise strategies to allocate portfolios accordingly. Large players like BlackRock already have many tools to mine data and derive insights.
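As a minimal sketch of that correlation-finding step, the snippet below computes a Pearson correlation between a hypothetical daily hawkishness series and bond-yield moves. Both series are fabricated for illustration; a real pipeline would align timestamps against market data and use rolling windows rather than a single coefficient.

```python
# Sketch: Pearson correlation between a sentiment series and asset moves.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily hawkishness scores and next-day bond-yield changes
sentiment = [0.2, 0.5, 0.9, 0.4, 0.7]
yield_move = [0.01, 0.03, 0.06, 0.02, 0.05]
print(round(pearson(sentiment, yield_move), 3))
```

A coefficient near 1 in a window like this would flag the relationship for a human analyst to vet, which matters because such signals decay once competitors find them too.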
Correlations inside walled gardens, especially for regulated instruments accessible only to accredited or sophisticated investors, are a different matter. Segmenting the investor set is harder because the group is less diverse, yet it is still possible. In addition, advertising in private equity is not the territory that big advertising platforms like Facebook or Google already cover: PE companies must keep information about their investors strictly confidential.
Decision-making with ML has a few challenges. Data going into an ML model needs to be calibrated and moderated to ensure fairness and to remove biases. Human interpretation is still essential to sift out false positives and noise: our brains evolved to find patterns, and sometimes we see signals where there is only noise. Another challenge is that if a model is too simple, or there are not enough informative features to work with, the outcome may be less valuable than we hoped; in such cases, sceptics dismiss ML as nothing more than statistics. In the nineteenth century, when statistics was taking its first baby steps as a science, the initial premise was that if we knew enough about the past, we would know enough to predict the future. Even in the twenty-first century, we cannot predict the weather with high certainty a week ahead. Zooming into the financial industry, an additional challenge increases the uncertainty of our predictions: human behaviour is not always rational. As George Soros argued in his theory of reflexivity, sociology is an integral part of understanding economics.
Over the last decade, the proliferation of research and interest in Artificial Intelligence has produced many practical applications that make our day-to-day lives more convenient. The current wave of Machine Learning tools, combined with cheap cloud storage and computing, allows us to analyse vast amounts of speech, language, and even images with a few clicks. Businesses stand to benefit significantly from predictive analytics, anomaly detection, and both supervised and unsupervised algorithms for clustering and segmentation.