Exploring the Archives of Big Data


The availability of large-scale, on-demand computing power has extended to more organizations the ability to investigate patterns and correlations in their existing data sets that they may have been unable to discern previously. Access to high-powered computing has also enabled wider use of unstructured textual data sets in the effort to uncover relationships among variables that are less pervasive but highly indicative. Machine learning capabilities are central to this endeavor and can produce unexpected insights.
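To make the idea concrete, the minimal sketch below (an illustration, not drawn from the article) pairs unstructured text with a structured indicator: TF-IDF term weights from a handful of hypothetical report excerpts are correlated against a made-up numeric series to surface the terms that move with it. The excerpts, the indicator values, and the variable names are all assumptions for demonstration; only the scikit-learn and NumPy calls are standard.

```python
# Illustrative sketch: surface terms in unstructured text that track a
# structured indicator. All inputs below are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical inputs: one text excerpt and one indicator value per period.
documents = [
    "consumer spending rose on strong card volumes",
    "card volumes slowed amid weaker consumer spending",
    "spending steady while deposit growth picked up",
    "deposit outflows and softer card volumes reported",
]
indicator = np.array([1.8, -0.6, 0.4, -1.1])  # e.g. quarterly growth, made up

# Turn each excerpt into a vector of TF-IDF term weights.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(documents).toarray()

# Correlate each term's weight series with the indicator.
terms = tfidf.get_feature_names_out()
correlations = []
for j, term in enumerate(terms):
    col = X[:, j]
    if col.std() > 0:  # skip terms with no variation across periods
        correlations.append((term, np.corrcoef(col, indicator)[0, 1]))

# Report the terms that move most closely with the indicator, either direction.
for term, corr in sorted(correlations, key=lambda t: -abs(t[1]))[:5]:
    print(f"{term:>10s}  corr={corr:+.2f}")
```

On a toy sample like this the correlations are not meaningful in themselves; the point is only to show how text features and a numeric series can be brought into one analysis.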

These techniques can address things that we saw as important during the crisis but are difficult to model – non-linearities and network analysis, for example. But the choice of a given technique will depend on the questions you face. It is very good to have new, sophisticated tools available, but the risk is developing black boxes that cannot deliver meaningful messages. It is essential to explore these techniques, work on specific projects and, perhaps most importantly, define exactly the question you want to answer. This exploratory work can be shared within and across institutions: central banks want to see precisely what other central banks and authorities are doing in terms of big data projects.
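As an illustration of the network analysis mentioned above (not part of the quoted remarks), the short sketch below builds a graph of hypothetical interbank exposures with NetworkX and ranks institutions by a weighted centrality score; the bank names and exposure amounts are invented for the example.

```python
# Illustrative sketch: rank institutions by centrality in a small, made-up
# network of interbank exposures.
import networkx as nx

# Hypothetical directed exposures: lender -> borrower, weighted by amount.
exposures = [
    ("Bank A", "Bank B", 120.0),
    ("Bank A", "Bank C", 45.0),
    ("Bank B", "Bank C", 80.0),
    ("Bank C", "Bank D", 60.0),
    ("Bank D", "Bank A", 30.0),
]

G = nx.DiGraph()
G.add_weighted_edges_from(exposures)

# A weighted PageRank score gives a rough sense of which institutions sit at
# the centre of the exposure network and where stress could concentrate.
centrality = nx.pagerank(G, weight="weight")
for bank, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{bank}: {score:.3f}")
```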

Mercator Advisory Group recognizes that while the U.S. banking system is decentralized, the same processes and approaches for unearthing relationships between published public data and unstructured textual data are being leveraged by private and government research organizations. We expect policies to become more focused and directed in the coming years, as the full range of inputs and of direct and indirect variables can be more easily included, weighted, and modeled.

Overview by Joseph Walent, Associate Director, Customer Interactions Advisory Service at Mercator Advisory Group

Read the full story here
