Do you remember Yahoo Directory? Back when the internet was gaining popularity in the 90s, this was a “big thing.”
If you don’t recall, Yahoo Directory was essentially just a large collection of websites, categorized in one place (i.e., storage). At the time, having that collection in one place was considered “convenient” and “innovative”—even though you could spend hours scanning through site after site to maybe discover one that fit what you were looking for.
Now we have Google—a search engine that helps perform that same discovery function, only in milliseconds, not hours. Plus, it gives context to your discovery, making results much more relevant.
Do you see how far we’ve come? Yet, I sit here amazed that financial institutions (FIs) have invested millions of dollars to create what is essentially their own form of Yahoo Directory for their data. Only these directories are called “data warehouses.”
I just have one question: It’s not the 90s anymore, so why are FIs taking a 90s approach to their data?!
How financial institutions got in this situation
First, I think it’s important to understand how we got here. For that, I want you to think back to high school. There was this thing called “peer pressure.” The psychological phenomenon that somehow convinced us to do things we might not normally do just because everyone else was doing it.
Fast forward to the present—a similar phenomenon still exists in business. Except we call it “market pressure” now, and it drives us to make certain business decisions we might otherwise decline or delay.
In recent years, specifically, banks have given in to the pressure surrounding Hadoop/Big Data, creating sophisticated ways of aggregating, centralizing, and storing their data in data warehouses. Their end goal was to reach customer-centric, data-driven nirvana.
The limiting factors of data storage
Data storage isn’t necessarily a bad thing in itself. Banks collect plenty of incredibly useful data—transaction data from card swipes, call center data, payment data, and more.
However, banks aren’t able to extract useful, timely insights from the data for three main reasons. First, much of the data (such as payment data) is unstructured, hard to classify and categorize, and often siloed.
Second, banks’ perspective on data is one of tunnel vision: they look for a single aspect of a customer’s life, when there are numerous aspects to consider, spread across multiple data points.
Third, the discovery process is manual, time-consuming, and limited to people with specific skill sets. In other words, only IT can access and use the data, when other functions like sales, marketing, and retail banking are the ones that really need it.
Even in cases where banks have put systems in place to retrieve information quickly, internal processes hinder them. Consider the typical process a bank goes through to make use of their data:
1. Identify the business case—the sales/marketing team needs a very specific audience of customers.
2. Validate the case—sales/marketing confirms the case is feasible and viable.
3. Get acceptance from IT—IT reviews and accepts the use case (this can take weeks or months).
4. IT processes the data—IT runs the business case.
5. Validate the results—sales/marketing and IT validate the results; if the results don’t justify the business case, steps 1–5 are repeated.
6. Apply the insights—sales/marketing develops, designs, and deploys a marketing or sales campaign using the data.
7. Track results—sales/marketing must work with IT to track tangible results such as new account openings, cross-sells, and product utilization.
Even when this process goes smoothly, it can take months to execute. Meanwhile, the relevancy of the business case, along with its potential results, often decreases as the days and weeks go by. In some situations, by the time the process concludes, the business case may be completely irrelevant and not worth further exploration.
Greater accessibility and contextualization of data are the way forward
Right now, banks are only working with a data warehouse (the directory), but they need to be using context to expedite their efforts and find the most relevant pieces of data (the search engine). Just as the search engine added context and accessibility to what was essentially storage, banks need to do the same with their data.
Data that sits in these huge data warehouses is unstructured or, at best, only partially structured. This makes it difficult and burdensome for anyone, especially those outside of the IT function, to query this data for specific business cases. Hence the lack of insights, despite housing such powerful data.
By contextualizing and categorizing this data through accurate data tagging, banks can add meaning to every transaction. Contextualization can make it easy for individuals across the organization to query specific behaviors and attributes.
For example, marketing wants to build an audience that may be in the market for a HELOC or HELOAN, so it finds customers that
- have a mortgage with their bank or a competitive bank;
- have transactions with home improvement stores like Home Depot, Lowe’s, and similar retailers; and
- do not have a HELOC or HELOAN.
By looking at these specific parameters, marketing can target an audience that elicits desired behaviors, develop an extremely relevant campaign, and achieve better results.
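The audience-building logic above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API; the tag names (`mortgage_payment`, `home_improvement`, `heloc`, `heloan`) and sample records are hypothetical stand-ins for what accurate transaction tagging would produce.

```python
# Hypothetical sketch of querying tagged customer data for a HELOC/HELOAN
# marketing audience. Tag names and records are illustrative only.

customers = [
    {"id": 101, "tags": {"mortgage_payment", "home_improvement"}},
    {"id": 102, "tags": {"mortgage_payment", "heloc"}},
    {"id": 103, "tags": {"home_improvement"}},
]

def heloc_audience(customers):
    """Customers with a mortgage and home-improvement spend, but no HELOC/HELOAN."""
    return [
        c["id"]
        for c in customers
        if "mortgage_payment" in c["tags"]          # has a mortgage
        and "home_improvement" in c["tags"]         # shops at home improvement stores
        and not ({"heloc", "heloan"} & c["tags"])   # no existing HELOC/HELOAN
    ]

print(heloc_audience(customers))  # [101]
```

The point is that once transactions carry meaningful tags, a marketer can express the audience as a simple set of behavioral filters, instead of waiting on IT to write a custom query against raw, unstructured records.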
Still, contextualization is not the only aspect that banks need to address with their data. Accessibility by functions outside of IT is just as important. The people who need the insights should be able to access them without having to wait on and depend on IT resources.
Banks need to invest in technology that makes their data readily available across all areas of the bank. In addition, the technology should make the data both easy to understand and easy to use by people of all skill sets and functions.
In overcoming these technology barriers, each function can be self-sufficient in extracting and using the data for their specific purposes, such as reducing risk, building campaigns, gaining competitive insights, evolving products, and innovating.
Many FIs have caved to market pressure in creating a data infrastructure that is only a small foundational piece of becoming a truly data-driven institution. Their data strategy must extend beyond simply storing their data.
By contextualizing their data and making it accessible across the organization, they will be able to derive insights that will inform their business cases, significantly shorten their timeline for execution, and achieve a better ROI.
And I, for one, can’t wait to see that become a reality.
Interested in more on this subject? I provide greater detail about data contextualization in my recent podcast with Payments Journal.
Rob Heiser is CEO and co-founder of Segmint, Inc., a provider of data-driven marketing technology that securely activates enterprise data to intelligently deliver personalized engagements measured across all channels.