
Data Virtualization – The Quest for Agility

In the quest for agility, let’s not sweep productivity under the carpet

Yes, data virtualization is definitely about agility. I covered this from different angles in my previous article. Agility in this context is about accelerating the time-to-delivery of new critical data and reports that the business needs and trusts. It is also about an architecture that is flexible to changes in the underlying data sources, without impacting the consuming applications. It's about doing data virtualization the right way, so you can deliver new critical data and reports in days instead of months.

However, what about productivity? A discussion around agility cannot and must not exclude productivity; the two go hand in hand. Whether it is the rate at which goods are produced or work is completed, or a measure of the efficiency of production, productivity is about doing something well and effectively. This calls for a deeper conversation around productivity, both as a capability and as a benefit. And yet it looks like somebody forgot to talk about this - or did they?

According to the 2011 TDWI BI Benchmark Report - "By a sizable margin, development and testing is the predominant activity for BI/DW teams, showing that initiatives to build and expand BI systems are under way across the industry. Nearly 53 percent of a team's time is spent on development and testing, followed by maintenance and change management at 26 percent." If a majority of time is spent in development and testing, and if BI agility is the end goal, isn't increased productivity absolutely critical?

I think Heraclitus of Ephesus was talking of productivity in the context of data virtualization when he said, "a hidden connection is stronger than an obvious one." There is definitely a hidden connection here. However, productivity often gets brushed under the carpet - not because it isn't critical, but because data virtualization built on data federation has its roots in hand-written SQL or XQuery. As we know, with manual coding, limited reuse, and unnecessary work, productivity simply goes out of the window.

Let's see how and where productivity fits in a data virtualization project. Such a project involves a discrete set of steps, starting from defining the data model to deploying the optimized solution. Like a piece of art, each step can be approached differently - either painfully, or by applying best practices that support productivity. Here's the typical life cycle of a data virtualization project, as prescribed by industry architects. By questioning how each step is undertaken, we can understand the impact on productivity:

  1. Model - define and represent back-end data sources as common business entities
  2. Access and Merge - federate data in real-time across several heterogeneous data sources
  3. Profile - analyze and identify issues in the federated data
  4. Transform - apply advanced transformations including data quality to federated data
  5. Reuse - rapidly and seamlessly repurpose the same virtual view for any application
  6. Move or Federate - reuse the virtual view for batch use cases
  7. Scale and Perform - leverage optimizations, caching and patterns (e.g., replication, ETL, etc.)

Let's start with model. The questions to ask here are: is there an existing data model? If yes, can you easily import it? That means being able to connect to, and pull a model in from, the numerous modeling tools in existence. If not - if you need to start from scratch - you must be able to jump-start the creation of a logical data model with a metadata-driven, graphical approach. Ideally, this should be a business-user-friendly experience, as the business user knows the data best. Correct?

Next, let's consider access and merge. Yes, it's the domain of data federation. It's about making many data sources look like one. However, in my discussion about why a one-trick pony won't cut it, I mentioned a recent blog by Forrester Research, Inc. It states that traditional BI approaches often fall short because BI hasn't fully empowered information workers, who still largely depend on IT. Let's ask the question - can business users also access and merge diverse data, directly, without help from IT?
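To make "many data sources look like one" concrete, here is a minimal Python sketch - illustrative only, not any vendor's API. A relational table and a non-relational feed (both hypothetical) are merged at query time into a single virtual view, with no staging and no data movement:

```python
import sqlite3

# Source 1: a relational database (hypothetical customer master).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?)",
               [(1, "Acme Corp"), (2, "Globex")])

# Source 2: a non-relational feed (hypothetical orders from a flat file or API).
orders = [{"customer_id": 1, "amount": 250.0},
          {"customer_id": 1, "amount": 125.0},
          {"customer_id": 2, "amount": 300.0}]

def virtual_view():
    """Federate both sources at query time - no staging, no data movement."""
    names = dict(db.execute("SELECT id, name FROM customers"))
    for order in orders:
        yield {"customer": names[order["customer_id"]],
               "amount": order["amount"]}

rows = list(virtual_view())
print(rows[0])  # {'customer': 'Acme Corp', 'amount': 250.0}
```

The point of the sketch is that the join happens on demand: the consuming application only ever sees `virtual_view()`, never the two physical sources.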

After you federate data from several heterogeneous data sources, it's all over, right? Wrong! That is where data virtualization built on data federation ends. To do data virtualization the right way, the show must go on, logically speaking. The same business user (not IT) must now be able to analyze not just the data sources, but also the integration logic - be it the source, the inline transformations, or the virtual target. This is profiling of federated data - which must mean no staging and no further processing.
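Profiling federated data in flight could look like the following minimal sketch, where the rows, columns and quality check are all hypothetical: the statistics are computed directly on the federated result set, with nothing staged to disk first.

```python
# A small federated result set (hypothetical), profiled as it arrives -
# nothing is staged to disk first.
federated = [
    {"customer": "Acme Corp", "email": "ops@acme.example", "amount": 250.0},
    {"customer": "Globex",    "email": None,               "amount": 300.0},
    {"customer": "Globex",    "email": None,               "amount": -10.0},
]

def profile(rows):
    """Per-column null and distinct counts, plus one simple validity check."""
    stats = {}
    for col in rows[0]:
        values = [r[col] for r in rows]
        stats[col] = {"nulls": sum(v is None for v in values),
                      "distinct": len(set(values))}
    # Hypothetical business rule: amounts should never be negative.
    stats["negative_amounts"] = sum(1 for r in rows if r["amount"] < 0)
    return stats

report = profile(federated)
print(report["email"])  # {'nulls': 2, 'distinct': 2}
```

A report like this is what lets the business user spot the missing e-mails and the suspect negative amount before anything is published to consumers.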

What follows next must be the logical progression of what needs to happen once the business user uncovers inconsistencies and inaccuracies. Yes, we need to apply advanced transformation logic, including data quality and data masking, to the federated data while it is in flight. This is where you must ask whether there are pre-built libraries available that you can easily leverage in your canvas. The alternative - manually coding such logic - obviously limits any reuse.
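As an illustration of in-flight data quality and masking, the sketch below applies a standardization rule and a simple e-mail masking rule as rows stream through. The rules are hypothetical stand-ins for the pre-built libraries the paragraph asks about:

```python
def mask_email(email):
    """Mask the local part of an e-mail address (illustrative rule only)."""
    if not email:
        return email
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

def cleanse(rows):
    """Apply reusable data-quality and masking logic to rows in flight."""
    for row in rows:
        yield {"customer": row["customer"].strip().title(),  # standardize names
               "email": mask_email(row["email"]),            # mask sensitive data
               "amount": max(row["amount"], 0.0)}            # hypothetical quality rule

raw = [{"customer": "  acme corp ", "email": "ops@acme.example", "amount": 250.0}]
print(next(cleanse(raw)))
# {'customer': 'Acme Corp', 'email': 'o***@acme.example', 'amount': 250.0}
```

Because `cleanse` is a generator, the transformations happen as the data flows - nothing is written to a staging area between federation and consumption.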

You have defined a data model, federated data, profiled it in real time and applied advanced transformations on the fly - all without IT. Yes, get some technical help to develop new transformations, if required. But look to do this graphically, in a metadata-driven way, where business and IT users collaborate instantly with role-based tools. Ask if you can take a virtual view and reuse it with just a few clicks. Check whether you could work with a few reusable objects instead of thousands of lines of SQL code.

We have discussed the steps involved in virtual data integration. Yes, there is no physical data movement, and yes, it's magical. However, ask what you can do if you need to persist data for compliance reasons. Or ask for the flexibility to also process in batch, when data volumes go beyond what an optimized and scalable data virtualization solution can handle. Don't just add to the problem with yet another tool. Ask if you can toggle between virtual and physical modes with the same solution.
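The "toggle" between virtual and physical modes can be pictured with this sketch, in which the names and the mapping rule are hypothetical: the same mapping logic is served on demand in virtual mode, and persisted - for example, for compliance - in batch mode.

```python
import sqlite3

def mapping(rows):
    """One reusable mapping, shared by both virtual and batch modes."""
    return [{"customer": r["customer"].title(), "amount": r["amount"]}
            for r in rows]

source = [{"customer": "acme corp", "amount": 250.0}]

# Virtual mode: compute on demand, nothing is persisted.
virtual = mapping(source)

# Batch mode: the same mapping, but the result is persisted
# (e.g., for compliance or for very large volumes).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customer_view (customer TEXT, amount REAL)")
db.executemany("INSERT INTO customer_view VALUES (:customer, :amount)",
               mapping(source))
persisted = db.execute("SELECT customer, amount FROM customer_view").fetchall()
print(persisted)  # [('Acme Corp', 250.0)]
```

The design point is that `mapping` is defined once; switching between federation and persistence does not mean rewriting the integration logic in another tool.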

Finally, as you prepare to deploy the virtual view, a fair question must be about performance. After all, we are talking about operating in a virtual mode. Ask about optimizations and get the skinny on caching. Try to go deep and find out how the engine has been built. Is it based on a proven, mature and scalable data integration platform, or one that only does data federation with all its limitations? Also, don't forget to ask whether the same solution leverages change data capture and replication patterns.
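Caching is one such optimization. A minimal, illustrative sketch of a time-bounded cache in front of a virtual view (the TTL and the query are hypothetical) shows why it matters: repeated queries within the window never touch the underlying sources.

```python
import time

class CachedView:
    """A time-bounded cache in front of an expensive virtual view.

    A common optimization pattern; the parameters here are illustrative,
    not any product's actual caching API.
    """
    def __init__(self, fetch, ttl_seconds=60.0):
        self.fetch = fetch
        self.ttl = ttl_seconds
        self._data, self._stamp = None, 0.0

    def rows(self):
        if self._data is None or time.monotonic() - self._stamp > self.ttl:
            self._data = self.fetch()           # hit the federated sources
            self._stamp = time.monotonic()
        return self._data                       # otherwise serve from cache

calls = 0
def expensive_federated_query():
    global calls
    calls += 1   # count trips to the underlying sources
    return [{"customer": "Acme Corp", "amount": 250.0}]

view = CachedView(expensive_federated_query, ttl_seconds=60.0)
view.rows(); view.rows()
print(calls)  # 1 - the second call was served from the cache
```

In a real engine the interesting questions are exactly the ones the paragraph raises: how the cache is invalidated (time, change data capture, replication) and how it interacts with the optimizer.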

If the solution can support the entire data virtualization life cycle discussed above, you are probably within reach of utopia: productivity and agility. The trick, however, is to ask the tough questions - each step done right not only shaves weeks off the process but also helps users become more efficient. Ah, this is beginning to sound like cutting the wait and waste in a process, as discussed in detail in the book Lean Integration. We've come full circle. But I think we just might have successfully rescued productivity from being swept under the carpet.

•   •   •

Don't forget to join me at Informatica World 2012, May 15-18 in Las Vegas, to learn the tips, tricks and best practices for using the Informatica Platform to maximize your return on big data, and get the scoop on the R&D innovations in our next release, Informatica 9.5. For more information and to register, visit www.informaticaworld.com.

More Stories By Ash Parikh

Ash Parikh is responsible for driving Informatica’s product strategy around real-time data integration and SOA. He has over 17 years of industry experience in driving product innovation and strategy at technology leaders such as Raining Data, Iopsis Software, BEA, Sun and PeopleSoft. Ash is a well-published industry expert in the field of SOA and distributed computing and is a regular presenter at leading industry technology events like XMLConference, OASIS Symposium, Delphi, AJAXWorld, and JavaOne. He has authored several technical articles in leading journals including DMReview, AlignJournal, XML Journal, JavaWorld, JavaPro, Web Services Journal, and ADT Magazine. He is the co-chair of the SDForum Web services SIG.


