Procurement: Understanding Access Controls

As we’ve discussed in earlier posts, many factors impact the data procurement process, making it the most nuanced stage in consumers’ data-buying journey. Access controls and rights are among the most important of these factors and need to be clearly understood by consuming organizations.

In this area, lessons can be learned from the financial information marketplace, where the complexity and high price and value of data place a strong emphasis on getting rights management right, with substantial penalties for getting it wrong.

Access controls may be approached in three basic ways, defined by how the datasets or products in question will be used. First, where data is consumed only internally by the buying entity, control, and sometimes even ownership, of the data is transferred to the consumer, and most suppliers are comfortable with the consumer using the data in any way it sees fit.

The main complication here relates to whether the data has been purchased outright or is being consumed on a rental basis – typically a subscription. If it’s the latter, the consumer needs to understand what rights it has to the data once the subscription ends. Is the historical data received during the life of the subscription theirs to store and use freely after the commercial relationship with the supplier has ended? This must be clearly understood as part of the procurement process.

The second use-case is redistribution, wherein the consumer re-sells or otherwise forwards the datasets to third parties. This is a common model in financial services, where market data vendors and others receive information from data producers – like exchanges, index operators and news suppliers – and aggregate the datasets to create bespoke services. This is also the approach of data brokers.

A third, similar model entails adding value during this redistribution process. Here, the original data consumer may commingle third-party or proprietary data or perform some calculation on the original data set, creating a so-called derived data set that it then sells at a premium to final consumers. 

These redistribution models raise questions around access controls and rights that again need to be clearly understood by the consumer as part of the procurement process. Financial services provide a good illustration of the complexity. In many cases, market data suppliers have been forced to develop their own identification codes for accessing data relating to any given instrument, as the final consumers may not subscribe to the fee-liable identifiers used by exchanges and other data originators. This creates a cost for the redistributor, but it can also result in vendor lock-in as the final consumer builds internal data systems and applications that are reliant on these redistributors' proprietary access codes.
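One common way to contain this lock-in risk is to keep a single translation layer between internal identifiers and each vendor's proprietary codes. The sketch below illustrates the idea; all vendor names and codes are invented for this example:

```python
# Hypothetical illustration of insulating internal systems from a
# redistributor's proprietary instrument codes. All codes are invented.

# Internal, vendor-neutral identifiers mapped to each vendor's codes.
INTERNAL_TO_VENDOR = {
    "ACME_EQUITY_001": {"vendor_a": "VA-98321", "vendor_b": "B/ACME.N"},
    "ACME_EQUITY_002": {"vendor_a": "VA-98322", "vendor_b": "B/ACMF.N"},
}

def vendor_code(internal_id: str, vendor: str) -> str:
    """Resolve our internal ID to the code a given redistributor expects.

    Keeping this translation in one place means switching vendors only
    requires updating the mapping table, not every downstream system.
    """
    return INTERNAL_TO_VENDOR[internal_id][vendor]

print(vendor_code("ACME_EQUITY_001", "vendor_a"))  # VA-98321
```

Downstream applications reference only the internal identifiers, so a change of redistributor touches the mapping table rather than every system built on top of it.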

Another potential pitfall relates to the fact that in traditional data sales, the selling organization often lacks the technical capability to split data into subsets to support more granular, targeted data services. As a result, it supplies the entire data set to the consumer, but imposes restrictions on usage in order to create different data products at different price points. This approach requires sellers to audit customers to ensure they are not using data in ways their license does not permit. The dreaded data audit can add unnecessary tension to the consumer-seller relationship, and can undermine the goodwill that helps engender long-term commercial arrangements.

In these instances, on-demand access or granular cataloging of data can help, by clearly identifying which subsets of the dataset are delivered to the consumer, simplifying the whole process and obviating the need for the data originator to support expensive audit functions. 

This kind of approach also applies to situations where individual data services or price points are defined by the age of the data within them. In financial services – as in many other industry segments – newer data is often considered more valuable than older data. Freshness certainly has value in trading, where real-time data is priced at a premium.

The point at which the consumer gains access to the data has implications for pricing and distribution rights. For example, many stock exchanges delay distribution of data to create a premium opportunity for real-time data. Many exchanges have tiered pricing based on time of access, with lower-cost options introducing delays, whether of minutes or hours. By creating different price points, this approach can reduce barriers to entry by making data sets available to consumers who can't afford premium products, thereby maximizing revenues to data producers.
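The delay-tiered model can be sketched as a simple lookup: given the delay a consumer is willing to tolerate, find the cheapest tier whose delay stays within that tolerance. The tier delays and fees below are invented for illustration:

```python
# Hypothetical tiered pricing by data age: the fresher the feed, the
# higher the price. Tier delays and monthly fees are invented.

from datetime import timedelta

# (maximum delay, monthly fee), ordered from freshest to most delayed.
TIERS = [
    (timedelta(seconds=0), 5000),   # real-time, premium price
    (timedelta(minutes=15), 500),   # 15-minute delayed
    (timedelta(hours=24), 50),      # end-of-day
]

def monthly_fee(tolerated_delay: timedelta) -> int:
    """Cheapest tier whose delay is within the consumer's tolerance."""
    for max_delay, fee in reversed(TIERS):  # most delayed (cheapest) first
        if max_delay <= tolerated_delay:
            return fee
    return TIERS[0][1]  # fallback: only real-time satisfies the requirement

print(monthly_fee(timedelta(minutes=15)))  # 500
```

A consumer who can tolerate a 15-minute delay pays the mid-tier fee, while one who needs real-time data pays the premium, which is exactly how the tiering expands the addressable market at the low end without cannibalizing premium revenue.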

The point here for the consumer, though, is that access timing often dictates price, which has implications for the procurement process. Understanding this dynamic is essential if the consumer is to put in place a comprehensive set of procurement steps that allows them to fully realize the value of the data they are acquiring.
