Overview:

It's been well over a year since I last wrote about how we can create folder structures on servers and networks whenever a new project is created in Vision. If you haven't had a chance to read those posts yet, I suggest you go back and read through them.

Since I first started toying with this concept about two years ago, I have made a number of enhancements to this customization to make it more of a re-usable product for my clients. Many of those enhancements came from one client's unique requirements. I'm still in the final testing stages with that client and plan to write more about those enhancements, and some lessons I've learned, once that project is complete.

For now, though, I would like to talk about a topic that came up during the aforementioned project but is relevant to any SQL-based automation task: performance.

The Problem:

For the first couple of implementations of this utility, performance was not much of a concern: we were doing limited processing in our stored procedures and PowerShell scripting, and we were generating a single folder at a time on the local network, which used a DFS structure to replicate the folders to remote offices.

My latest project presented a very different scenario. A great deal of processing logic is needed to determine which folder structure to create and where to generate the folders, on one of 20+ different file servers in remote offices across the US.

With robust Vision servers the scripting process flow was reasonably fast; however, once the multi-level folder structures started to be generated on the remote servers, we started to see a lag. From the second the workflow was initiated, the end user could be waiting up to 30 seconds before the script completed and allowed them to continue on in Vision. Thirty seconds is by no means extreme given the task at hand, but as anyone who has worked in IT can attest, users hate to wait any length of time!

How SQL Processes Scripts:

SQL Server processing is synchronous: when you fire a function or stored procedure, the call does not finish until every script, function, and step invoked inside that initial process has completed.

So when we call a stored procedure from a Vision workflow, SQL Server will not release Vision until it is done, and in this case "done" includes the PowerShell script that is creating the folders on those remote servers.
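To make the bottleneck concrete, here is a minimal sketch of the synchronous pattern. The procedure, script path, and parameter names (dbo.CreateProjectFolders, C:\Scripts\CreateFolders.ps1, @ProjectID, @OfficeCode) are illustrative assumptions, not the utility's actual names, and the string-building is simplified for brevity; xp_cmdshell must also be enabled on the instance for this to run.

    -- Hypothetical synchronous version: Vision calls this procedure and
    -- must wait for everything inside it, including PowerShell, to finish.
    CREATE PROCEDURE dbo.CreateProjectFolders
        @ProjectID  VARCHAR(32),
        @OfficeCode VARCHAR(8)
    AS
    BEGIN
        SET NOCOUNT ON;

        -- ...processing logic that picks the folder structure and target server...

        DECLARE @cmd VARCHAR(4000);
        SET @cmd = 'powershell.exe -NoProfile -File "C:\Scripts\CreateFolders.ps1"'
                 + ' -ProjectID ' + @ProjectID
                 + ' -OfficeCode ' + @OfficeCode;

        -- xp_cmdshell does not return until powershell.exe exits, so the
        -- calling Vision workflow waits for the remote folder creation.
        EXEC master.dbo.xp_cmdshell @cmd;
    END;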

Asynchronous Processing:

So how can we get SQL Server to process our script asynchronously? In simple terms, how can we let our user fire the workflow (via a button or other means) and carry on with their work in Vision while the script runs in the background?

Starting with SQL Server 2005, Service Broker was introduced as a messaging and queuing service between SQL Server instances. It gives a message sent on one instance the ability to trigger scripts or jobs on another instance, or on the same one.

There are abundant resources on the web if you search "SQL Service Broker", and I recommend you do if you want to know more about asynchronous processing in SQL Server.

In my research, however, it became clear that implementing Service Broker for this project, and for this folder creation utility in general, was overkill. There had to be a simpler way, and there was: a SQL Agent job.

Here is how it works (a T-SQL sketch of each step follows the list):

1. We create a new table in our Vision database to hold parameters for the utility. There should be a column for each unique parameter, as well as a unique identifier field and a bit field we will use to flag processed calls of the utility.

2. Next we create a new stored procedure which will be called from Vision and will receive our parameters, then insert them into the new table. Every time the utility is called in Vision, a new row is inserted with a unique ID, the parameters, and a default bit flag of "0".

3. Now we create a SQL Agent job scheduled to run every 30 seconds. This job is a simple "while" loop that looks in the new table for the first record with a "0" flag, passes that record's parameters to a call to our original stored procedure, and then sets the record's flag to "1" so that the record doesn't get processed again.
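A minimal sketch of the queue table from step 1. The table and column names (dbo.FolderQueue, ProjectID, OfficeCode) are illustrative assumptions; the real utility would have one column per actual parameter.

    -- Step 1 (sketch): queue table holding one row per call of the utility.
    CREATE TABLE dbo.FolderQueue
    (
        QueueID    INT IDENTITY(1,1) PRIMARY KEY, -- unique identifier
        ProjectID  VARCHAR(32) NOT NULL,          -- example parameter
        OfficeCode VARCHAR(8)  NOT NULL,          -- example parameter
        Processed  BIT NOT NULL DEFAULT 0         -- 0 = pending, 1 = done
    );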
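Step 2 as a sketch (dbo.QueueFolderCreation is again a hypothetical name): the lightweight procedure Vision calls. Because it only inserts a row and returns, the Vision workflow completes almost instantly.

    -- Step 2 (sketch): the procedure the Vision workflow actually calls.
    CREATE PROCEDURE dbo.QueueFolderCreation
        @ProjectID  VARCHAR(32),
        @OfficeCode VARCHAR(8)
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Just record the request; the heavy lifting happens later.
        INSERT INTO dbo.FolderQueue (ProjectID, OfficeCode)
        VALUES (@ProjectID, @OfficeCode);
    END;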
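And step 3, the body of the SQL Agent job step, where dbo.CreateProjectFolders stands in for the original, slow stored procedure from the earlier sketch.

    -- Step 3 (sketch): T-SQL job step, scheduled every 30 seconds.
    DECLARE @QueueID INT, @ProjectID VARCHAR(32), @OfficeCode VARCHAR(8);

    -- Keep draining until no unprocessed rows remain.
    WHILE EXISTS (SELECT 1 FROM dbo.FolderQueue WHERE Processed = 0)
    BEGIN
        SELECT TOP (1)
               @QueueID    = QueueID,
               @ProjectID  = ProjectID,
               @OfficeCode = OfficeCode
        FROM dbo.FolderQueue
        WHERE Processed = 0
        ORDER BY QueueID;

        -- Call the original (slow, synchronous) procedure in the background.
        EXEC dbo.CreateProjectFolders @ProjectID, @OfficeCode;

        -- Flag the row so it never gets processed again.
        UPDATE dbo.FolderQueue
        SET Processed = 1
        WHERE QueueID = @QueueID;
    END;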

Conclusion:

By using the SQL Agent and a small amount of coding, we can create a very simple workaround for the performance issues that come with synchronous processing.

The SQL Agent job carries very little overhead and will process as many records as it finds in an unprocessed state each time it runs. Depending on how quickly users need the folders to be accessible, the job could be run less frequently than every 30 seconds.

This same SQL Agent-based "offloading" of a SQL process could be applied anywhere performance is an issue, or wherever there is a need to do your heavy lifting as a background process.

Stay tuned for the next update where I will go into greater detail on how I have extended this popular utility!

If you have comments or questions, drop us a line at [email protected] or leave a comment.