SnapLogic Brings Real-Time Data-Flow Analysis to Visual App Design Tool

 
 
By Jeff Cogswell  |  Posted 2012-04-30
 
 
 
 
 
 
 

Now with real-time data-flow analysis, SnapLogic aids data-integration projects with a subscription-based service and connecting "Snaps."

The SnapLogic Cloud Integration Platform Spring 12 release adds real-time data-flow analysis, component validation and debug tracing to the data-connection platform. I recommend this tool if you need to do regular data processing and conversion.

eWEEK Labs tests showed that the SnapLogic Spring 12 release proved capable of snapping together on-premises or cloud-based applications and data sources using "snaps" that represent each type of source, such as MySQL tables, comma-separated values (CSV) files, and even data-join operations.

IT managers running large data centers that need to process and move large amounts of data should add SnapLogic Spring 12 to their short list of products. SnapLogic is sold as a subscription, with cost varying based on the amount of data throughput. Snaps range in price from free to just under $10,000.

To try out the new features, I used the SnapLogic Designer to create a simple connection between a MySQL table and a CSV file. As I worked, the SnapLogic Designer automatically offered a choice of valid snaps, or "components" as they're also called, for each operation and each table. For example, a table called Customer could have a component for reading from Customer, one for inserting data into Customer, one for deleting data from Customer, one for looking up data in Customer and one for updating Customer.

Finally, a component could perform what SnapLogic calls an "upsert" operation that combines an update with an insert action. These components can be dropped on the canvas, setting the stage for operations on that particular table. It is possible to start with the lower-level database components for more complex operations.
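If the upsert idea is unfamiliar, it maps onto SQL a MySQL user may already know: insert a row, or update it if the key already exists. Here is a minimal sketch of that conceptual equivalent using MySQL's INSERT ... ON DUPLICATE KEY UPDATE; the connection details, table and column names are my own illustrations, not anything SnapLogic prescribes.

```python
# Conceptual equivalent of an "upsert": insert the row, or update it
# if a row with the same key already exists. All names are illustrative.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="demo", password="demo", database="crm"
)
cur = conn.cursor()
cur.execute(
    """
    INSERT INTO Customer (id, name, email)
    VALUES (%s, %s, %s)
    ON DUPLICATE KEY UPDATE name = VALUES(name), email = VALUES(email)
    """,
    (42, "Acme Corp", "sales@acme.example"),
)
conn.commit()
cur.close()
conn.close()
```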

I chose a Read operation on a particular MySQL table. I then added a CSV_Writer component, which saves data to a flat file in comma-separated format. I connected the two components so that I could read data from the MySQL table and push it into the CSV_Writer component, which would, in turn, save the data it receives to a file that I could later open in Excel.

That's the general approach to SnapLogic: The data flows from one component to the next, and each component processes the data in a way that you specify through the component's configuration. I chose the fields to read from the MySQL table, and then, when I created the connection, the CSV_Writer automatically picked up those fields and their names by default. I could then rename those fields so they would appear differently in the final file, but I decided to leave them the same.
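This is not SnapLogic's internal API, but the flow model is easy to picture in code: each stage consumes records from the one upstream and yields records downstream, with an optional renaming stage in between. A toy sketch, with every function and field name hypothetical:

```python
# Toy illustration of component-style data flow. Each stage consumes
# records from upstream and yields records downstream; rename_fields
# mimics the optional field renaming described above.
import csv

def read_rows(rows):                   # stand-in for a MySQL Read snap
    for row in rows:
        yield row

def rename_fields(records, mapping):   # optional renaming stage
    for rec in records:
        yield {mapping.get(k, k): v for k, v in rec.items()}

def write_csv(records, path):          # stand-in for the CSV_Writer snap
    records = list(records)
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=records[0].keys())
        writer.writeheader()
        writer.writerows(records)

source = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Bob"}]
write_csv(rename_fields(read_rows(source), {"name": "customer_name"}), "out.csv")
```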

Then I ran it by clicking the Run button. An error message popped up, telling me I had forgotten to give my CSV file a name. So I clicked on the component and, in the properties at the bottom of the screen, typed in a file name. Then I ran it again, and this time the operation succeeded. Done deal: I had a CSV file with the data in it.

To program this operation manually, I would have to write a script that connects to the table, grabs the data and writes it to a file, roughly like the sketch below. In SnapLogic, it took only seconds to drag each component onto the canvas and connect them, and then a few more seconds to verify the field names I wanted and set the file name.
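For comparison, here is approximately what that hand-written script looks like in Python, assuming the mysql-connector-python driver and the standard csv module; the host, credentials, table and column names are placeholders.

```python
# The manual alternative: connect to MySQL, read the Customer table,
# and write the rows to a CSV file that Excel can open.
import csv
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="demo", password="demo", database="crm"
)
cur = conn.cursor()
cur.execute("SELECT id, name, email FROM Customer")

with open("customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row
    writer.writerows(cur.fetchall())                      # data rows

cur.close()
conn.close()
```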

As the pipeline was running, I was able to test one of the new features: real-time data-flow analysis. I floated the mouse over the components and saw a pop-up window with statistics about the data flowing through each component, including the number of records coming through, the records processed per second, CPU utilization and wait time. The statistics are live, and I could see the numbers updating continuously as my pipeline ran.
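SnapLogic computes these numbers for you, but the arithmetic behind a records-per-second figure is simple to sketch: count records as they pass and divide by elapsed time. The class below is purely illustrative and not part of SnapLogic.

```python
# Illustrative throughput counter of the kind behind a records-per-second
# statistic: count records as they pass, divide by elapsed wall-clock time.
import time

class ThroughputMeter:
    def __init__(self):
        self.start = time.time()
        self.count = 0

    def record(self, n=1):
        self.count += n

    def records_per_second(self):
        elapsed = time.time() - self.start
        return self.count / elapsed if elapsed > 0 else 0.0

meter = ThroughputMeter()
for _ in range(10000):          # stand-in for records flowing through a snap
    meter.record()
print(f"{meter.records_per_second():.0f} records/sec")
```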



 
 
 
 
Jeff Cogswell is the author of Designing Highly Useable Software (http://www.amazon.com/dp/0782143016), among other books, and is the owner/operator of CogsMedia Training and Consulting. Currently Jeff is a senior editor with Ziff Davis Enterprise. Prior to joining Ziff, he spent about 15 years as a software engineer, working on Windows and Unix systems and mastering C++, PHP, and ASP.NET development. He has written over a dozen books.
 
 
 
 
 
 
 
