SnapLogic Takes Time to Learn
I found this feature extremely useful because it let me see exactly how far along the process was and confirm that it was, in fact, still running. I did the same type of analysis on each component, as well as on the connectors, and saw similar information.

I already mentioned that I received an error when I ran the pipeline; however, among the new features is one called validation. This means that you don't have to wait until running the pipeline to catch your errors. Instead, you can validate your components as you create them. I then tied the output of this filter to a second CSV_Writer component. But this time, before running the entire pipeline, I clicked on the second CSV_Writer component and clicked Validate, which resulted in the same message as before: Property File Name requires a value. The idea here is that I can catch the error now, before adding more components and well before running the pipeline.

The next new feature under debugging is Data Tracing. This feature let me see the data as it flowed through each component, both before the component processed the data and after. To test this out, I chose a run option called Trace All. This started the pipeline running, and I clicked on my first CSV_Writer component and then on the Trace tab in the properties window, which quickly filled with data records as they came in. I was able to do this for all the components in my design and see the data coming into each component and the data coming out. I checked the second CSV_Writer's data, both input and output, and could see that it was receiving a subset of the original data (since the data was coming in through a filter). There's also a copy-to-clipboard button next to each trace output, so you can immediately grab the output and paste it into another app, such as Excel.
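To make the tracing idea concrete, here is a minimal sketch of the concept in Python, not SnapLogic's actual mechanism: each component is wrapped so that every record entering and leaving it is captured in a log you can inspect. The component and field names (`big_orders_filter`, `amount`) are illustrative assumptions.

```python
# Sketch of the Data Tracing concept: wrap a pipeline component so its
# input and output records are both recorded for later inspection.

def make_traced(name, component, trace_log):
    """Wrap a component function; log every record in and out."""
    def traced(records):
        for record in records:
            trace_log.append((name, "in", record))
        results = component(records)
        for record in results:
            trace_log.append((name, "out", record))
        return results
    return traced

def big_orders_filter(records):
    # Toy filter component: keep only rows where amount > 100.
    return [r for r in records if r["amount"] > 100]

trace_log = []
filter_step = make_traced("Filter", big_orders_filter, trace_log)

rows = [{"id": 1, "amount": 50}, {"id": 2, "amount": 250}]
out = filter_step(rows)
# trace_log now shows both records going in and only one coming out,
# which is exactly the before/after view the Trace tab provides.
```

Inspecting the log per component is what lets you spot, say, a filter that drops more rows than you expected.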
What's also cool about the tracing feature is that if you have multiple intermediate steps (such as filters and joins of different data components), you can see this intermediate data, not just the final data in your resulting tables or files. That way, if you have a problem such as a wrongly set filter, you can check each component's input and output until you find exactly where the problem lies. The only downside I could see with SnapLogic is that there's a learning curve, which depends on your level of experience with similar products. SnapLogic encourages its customers to get on-site training. In my case, that consisted of a member of SnapLogic's team taking me through the product, step by step, via an online meeting. My training lasted about an hour, and even after that, I found myself a bit lost at the beginning. I had to dig through the window that holds all the components to figure out which one was which, and it wasn't immediately apparent how to get to the properties window or how to choose the run option that includes tracing information. However, after I found all that and was more comfortable, I was able to build what I needed in only a matter of minutes.
To try this out, I added a filter (a special component that takes incoming data, filters it based on parameters you provide, and sends the filtered records on to the next component). I connected the original MySQL_Read component to this filter so that the filter would receive the same data the CSV_Writer component receives, simultaneously. That alone is a cool feature, although not a new one: the data from a single component can be pushed into more than one downstream component, so you can do simultaneous processing.
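The fan-out pattern described above can be sketched in a few lines of Python. This is an illustration of the data flow, not SnapLogic's API; the names (`mysql_read`, `csv_write`, `amount_filter`) and the sample rows are assumptions. One source's output feeds two consumers at once: a CSV writer that gets every record, and a filter that passes only a subset to a second writer.

```python
import csv
import io

def mysql_read():
    # Stand-in for the MySQL_Read component's output.
    return [{"name": "Alice", "amount": 50},
            {"name": "Bob", "amount": 250}]

def csv_write(records, stream):
    # Stand-in for a CSV_Writer component.
    writer = csv.DictWriter(stream, fieldnames=["name", "amount"])
    writer.writeheader()
    writer.writerows(records)

def amount_filter(records, threshold=100):
    # Filter component: pass only records above the threshold.
    return [r for r in records if r["amount"] > threshold]

data = mysql_read()  # one source ...

full_csv = io.StringIO()
filtered_csv = io.StringIO()
csv_write(data, full_csv)                      # ... feeds writer #1 directly
csv_write(amount_filter(data), filtered_csv)   # ... and writer #2 via the filter
```

After this runs, the first writer's output contains every row, while the second contains only the filtered subset, mirroring the simultaneous-processing behavior the review describes.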