Monday, December 29, 2008

A Medical Graphic for Market Flow Analysis ???


A New York Times article shows a graphic of what happens inside a cancer cell. The graphic struck me as apropos to flow analysis of markets.

(The outer ring would be a heatmap of sector movement, the next inner ring would be the current volumes, the next inner rings might show trading movements from one sector to another,...)

http://www.nytimes.com/2008/12/25/science/25visual.html




©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Saturday, December 27, 2008

Updates from the Front

It has been difficult for me to get the proper motivation to blog. These past few weeks have been like a punch to the gut for anyone working in the financial services industry. We have seen friends, colleagues and relationships dissolve as the capital markets sector has undergone a painful consolidation. And as managers, it has been especially painful to deliver bad news to some of the very hard workers we have collaborated with over the past several years.

I am hopeful that from adversity come new opportunities for my former colleagues. The capital markets sector is not the be-all and end-all in jobs, and there are plenty of opportunities out there for good technical people. As I have said before here ... Always Be Coding. Keep your technical skills sharp. Get out of your offices, read blogs, go to MeetUps, start to code that great new app.

I want to thank my team ... SW, HH, FW, JW, PJ, and JR ... for taking the wisps of smoke and turning them into an application which we hope will signal a new class of applications in my firm for capturing, cleaning, analyzing, and visualizing real-time data flows.

I also want to thank the vendors who have made this year successful for the CEP team. We have given Coral8 much deserved grief at times, but we are confident that with the new technical management, they will deliver some great things to us in 2009.

On a personal note, in addition to heading up the CEP team, I have just taken over the role of Chief Architect and Chief Strategist for Equities (a title that reads better than it is), and I have a new team reporting into me. I have taken over the role of my former boss, but you won't find any gala press releases that trumpet this appointment.

Does this mean that my company will turn into a 100% Microsoft shop, with every application tied into CEP? No, not really.

The title of Chief Architect means a lot of different things in a lot of different companies. In my company, it means dividing my time into interesting things (engaging new vendors and technologies, doing POC's, and trying to convince the business to come up with the budget to follow through on the more worthwhile POC's), and more mundane things (like doing roadmaps, pursuing system retirements, trying to consolidate various efforts, etc). It also means hobnobbing with my fellow wizards in the other divisions (Fixed Income, Munis, Transaction Services, Retail Banking) and trying to overcome the traditional silos to come up with consolidated efforts and cost savings in these difficult times. I dare say that the words "Process Re-engineering" and "Cost Savings" are a lot more important than "Revenue Building" in these times.

It also means that I need to take a sobering look at Microsoft technologies. One of the reasons I was hired into my company was to help the former Chief Architect push Microsoft technologies throughout my firm. However, I like to think that I will not be a shill for any company. Microsoft will have to prove themselves to be worthy alongside the reams of Java infrastructure that we employ. I would also like to think that I can use my new powers for good instead of evil, and that I will be able to push Microsoft to pay more attention to the capital markets sector.


Who would have thought that in the span of one year, Merrill, Wachovia, Bear, Lehman, and WAMU would be gone, and GS and Morgan Stanley would be turned into bank holding companies?

I have no idea what 2009 will bring. All of the news reports indicate that 2009 will be as rough as 2008. But we all need to stay focused so that, when we come out of the muck and mire in (hopefully) 2010, we can all hit the ground running.


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Sunday, December 14, 2008

Microsoft Surface

I haven't had much time to blog due to all of the end-of-year things going on at work and all of the turmoil in the financial services industry.

After the Waters USA 2008 conference last Monday, a few of us shuffled over to the Microsoft Technology Center on 6th Avenue and 51st Street. Joe was nice enough to invite us to see a demo of Microsoft Surface.

Surface is basically a giant touchscreen. I would say that it would find its greatest use in a retail scenario where a single app runs all day. Customers would be attracted to the Surface's large footprint and to the compelling graphics offered by the WPF-based apps that run on it.

Some points about the Surface that are negatives for me:

1) Very large footprint. It is roughly the dimensions of a coffee table. The base is filled with cameras and a computer.

2) The surface of The Surface has to be parallel to the ground. You cannot angle the Surface in any way. You cannot place it perpendicular to the ground like a regular computer monitor. This makes it very difficult to put on a trading floor.

3) You can only run one application on the Surface at a time. You cannot have multiple apps running simultaneously and "drag and drop" between them.

4) The surface is not pressure-sensitive, nor is there any kind of tactile feedback mechanism.

5) There is no concept of Z-order (i.e., depth).

What you have with the Surface is a very large touchscreen that is designed for retail-based kiosk-type applications.

There is a catalog of "motions" available for you to use in your apps. For example, when you put your thumb and forefinger together, the Surface detects a "pinching motion". I imagine that your app is sent an OnPinch event, much like a MouseDown event is sent when you click a mouse. But my limited imagination does not allow me to think of a good use for motions in a trading app.
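Purely as a sketch of how I imagine the programming model works (OnPinch, PinchEventArgs, and every other name below are names I made up, not the actual Surface SDK), a motion handler would presumably follow the standard .NET event pattern:

using System;

// Hypothetical sketch of motion handling. PinchEventArgs and the Pinch
// event are invented names -- NOT the actual Surface SDK API.
public class PinchEventArgs : EventArgs
{
    public double ScaleDelta;   // how much the distance between the two contacts changed
}

public class SurfaceChart
{
    // An app would presumably subscribe to this the way it subscribes to MouseDown.
    public event EventHandler<PinchEventArgs> Pinch;

    public void SimulatePinch(double scaleDelta)
    {
        EventHandler<PinchEventArgs> handler = Pinch;
        if (handler != null)
            handler(this, new PinchEventArgs { ScaleDelta = scaleDelta });
    }
}

class Demo
{
    static void Main()
    {
        SurfaceChart chart = new SurfaceChart();
        // One imaginable trading-floor use: pinch to zoom a chart's time axis.
        chart.Pinch += delegate(object sender, PinchEventArgs e)
        {
            Console.WriteLine("Zoom time axis by a factor of " + e.ScaleDelta);
        };
        chart.SimulatePinch(1.5);
    }
}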

I am trying to figure out how to use it on the trading floor. I don't think that it can be used effectively for a trading application. Where I think it might be helpful is in encouraging collaboration between traders, something to replace "The Hoot", but even that is a stretch.

If you can think of a good application for the Surface on a trading floor, please let me know.


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Wednesday, December 03, 2008

Scott's Powershell provider for Coral8

Scott posted this message on the Coral8 Users group on LinkedIn. Look at his stuff and give him some feedback.

Coral8 & PowerShell

If you use Coral8 on Windows, and you use Powershell for other development and admin tasks, you may be interested in the powershell navigation provider and cmdlets which I've uploaded to

http://code.google.com/p/coral8shell/

The nav provider supports basic cd and ls commands for workspaces, applications, and streams.

For admin tasks, the following cmdlets have been created

Get-C8Status, Get-C8App, Add-C8App, Remove-C8App, Start-C8App, Stop-C8App


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Monday, November 24, 2008

Fidessa Fragmentation Index

http://fragmentation.fidessa.com

I have a feeling that this would be important to Aleri and Streambase. I welcome comments from both companies (Jon and Mark?) to explain if and how they might use it (Streambase with Smart Order Routing, Aleri with Liquidity Management).

In particular, would this affect any of the pragmatic work by Robert Almgren (who is an advisor to Streambase)?

©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Wednesday, November 19, 2008

Do not mix RFA_String and std::string in Reuters RFA !!!!!!!

Some days, it's good to Always Be Coding. Today, it wasn't.

I just spent an entire day chasing down a problem, trying to merge the Reuters OMM API into our MarketFeed-based app. Thanks to Apurva, I was led to a small paragraph in a Readme file that said:

Known Deficiencies
- Mixing of RFA_String and std::string in application interface and implementation

As soon as I got rid of the std::string references in the code, everything magically worked!



©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Sunday, November 16, 2008

Reuters Market Data Performance Problems Under 64-bit Windows

Our CEP system has a process that reads market data from Reuters RMDS and publishes the ticks to Coral8. Everything runs fine when the market data reader is running on a 32-bit development machine. However, when we put the process onto our 64-bit Windows 2003 server, the market data process steadily consumes about 85% of one core.

We use the Reuters RFA API. This involves using the 32-bit C++ DLLs that RFA provides. Our market data process is mixed-mode between managed C++ and C#, and the unmanaged Reuters DLLs.

In order to get our market data process to run on 64-bit Windows, we need to use the CORFLAGS utility that Microsoft provides. The command to do this is:

CORFLAGS OurMarketDataReader.exe /32BIT+

The process runs under WOW64, which provides 32-bit emulation under 64-bit Windows systems.


The following MSDN article gives some warnings about WOW64 and performance:

WOW64 is important because it allows you to leverage most of your existing 32-bit code when performance and scalability are not a concern...... Another thing to keep in mind about WOW64 is that it is not designed for applications that require high-performance. At the very least, the WOW64 sub-system needs to extend 32-bit arguments to 64-bits, and truncate 64-bit return values to 32-bits. In the worst case, the WOW64 sub-system will need to make a kernel call, involving not only a transition to the kernel, but also a transition from the processor's 32-bit compatibility mode to its native 64-bit mode. Applications won't be able to scale very well when run under WOW64. For those applications that you would like to leave as 32-bit, test them under WOW64. If the performance is not meeting your expectations, you'll need to look at migrating the application to 64-bit.


Our market data process handles about 3000 ticks per second over about 8000 stocks and currency pairs. In the market data process, I put in two caches (it's actually a ring of caches) that the process switches between. Every tick that is read from RMDS is put into a cache (ticks are last-reliable), and every second, the cache is dumped to Coral8. While the cache is being dumped, the other cache takes over.
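Here is a minimal sketch of the two-cache idea (all names are made up; this is not our production code). Ticks overwrite each other by symbol in the active cache, and a once-a-second timer swaps the caches and flushes the idle one downstream:

using System;
using System.Collections.Generic;
using System.Threading;

// Minimal sketch of the double-buffered tick cache. Ticks are
// "last-reliable", so within a one-second window each symbol keeps
// only its most recent tick.
public class TickConflater
{
    private Dictionary<string, double> active = new Dictionary<string, double>();
    private Dictionary<string, double> standby = new Dictionary<string, double>();
    private readonly object swapLock = new object();
    private readonly Timer flushTimer;
    private readonly Action<string, double> publish;  // e.g. wraps the Coral8 publisher

    public TickConflater(Action<string, double> publish)
    {
        this.publish = publish;
        // Dump the idle cache to the publisher once a second.
        flushTimer = new Timer(delegate { Flush(); }, null, 1000, 1000);
    }

    // Called for every inbound tick; a newer tick simply replaces the older one.
    public void OnTick(string symbol, double lastPrice)
    {
        lock (swapLock) { active[symbol] = lastPrice; }
    }

    private void Flush()
    {
        Dictionary<string, double> toDrain;
        lock (swapLock)
        {
            // Swap: new ticks now land in the standby cache while we drain.
            toDrain = active;
            active = standby;
            standby = toDrain;
        }
        // Publishing happens outside the lock (assumed to finish well
        // within the one-second interval).
        foreach (KeyValuePair<string, double> kv in toDrain)
            publish(kv.Key, kv.Value);
        toDrain.Clear();
    }
}

The swap itself is just a couple of reference assignments under a lock, so the reader thread never blocks behind the flush.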

We are hoping that Reuters will provide either native 64-bit DLLs for RFA, or a native .NET version of RFA that does not involve wrapping the existing 32-bit RFA DLLs. This is the only way that we will be able to get decent performance on 64-bit Windows.

Fortunately, Brian Theodore is now head of the Reuters API group, and we have started to engage with him.

If Reuters can't deliver this to us, then we will probably need to requisition a 32-bit server that is simply devoted to running the market data process. That means a good amount of groveling by me to the Hardware group :-( It also means sending market data over the network to Coral8, which is an additional network hop that we can afford right now, but that we might not want to incur in the future.


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Saturday, November 15, 2008

Followup to "Do you need a Commercial CEP System?"

Richard Bentley of Apama has a great post in which he details the various uses of Apama in a number of financial companies. Richard's post and Tim Bass's companion post are two additional ways of thinking about the viability of commercial CEP engines.

My previous post concluded that

1) If I had a large team of developers, AND I had a number of pre-written frameworks that provided input and output adapters, AND I had a relatively easy use case (i.e., simple aggregations, simple pricing, etc.), AND if there was no existing vendor solution, then I would prefer to write everything in-house. (All four conditions need to be satisfied.) I would choose this path because it gives us the maximum amount of control over the engine.

2) If I had a small team of developers, and money was no object, and if there was no existing vendor solution, then I would purchase a CEP engine from a vendor. If money was tight, I would probably opt for Esper. I would have to balance my budget with the lost opportunity cost. If the opportunity cost was a lot greater, then I might go for a vertical CEP application, if one existed.

If I needed an algo trading application and a generic CEP engine, why not use Apama?

Apama has been in a sweet spot for a long time, and Richard Bentley has seen the fruits of this. They have offered a generic CEP solution for a while, but their claim to fame is that people like John Bates and Mark Palmer helped to create a trading-solution-in-a-box. When I hear people at my firm talk about Apama, they talk about it as if it were its own class of applications, much in the way people refer to Ketchup now instead of Catsup. They don't even know that Apama is related to CEP.


Where do I think the future of CEP lies? In building blocks that you can choose from in order to create applications. A risk management block, an algo trading block, a pricing block, a FIX processing block, a market surveillance block. These might be sold like Legos. You choose an algo trading block, which comes with the engine, pre-written code, a market data adapter of your choice, and a FIX adapter. You can snap in a risk management block. You can snap in a P&L block. All of these blocks come with source code so that you can tune them.
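Speculating about what that might look like at the API level (every name below is invented for illustration), each block would snap into a common engine interface and register its own vendor-tuned queries:

using System;

// Toy sketch of the "Lego block" idea; every name here is invented.
// Each vertical block plugs into a shared engine through one interface.
public interface ICepEngine
{
    void RegisterQuery(string streamingSql);
    void Subscribe(string streamName, Action<object> onEvent);
}

public interface ICepBlock
{
    string Name { get; }
    // Snap the block in: register its queries and subscriptions.
    void Attach(ICepEngine engine);
}

public class RiskManagementBlock : ICepBlock
{
    public string Name { get { return "Risk Management"; } }

    public void Attach(ICepEngine engine)
    {
        // A vendor-tuned query, shipped as source so customers can tweak it.
        engine.RegisterQuery(
            "SELECT Counterparty, SUM(Notional) FROM Positions GROUP BY Counterparty");
        engine.Subscribe("RiskAlerts", delegate(object alert)
        {
            Console.WriteLine("Risk alert: " + alert);
        });
    }
}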

These pre-written blocks will have been written, tuned, and verified by the vendor. There would be less of a chance of a developer struggling with Streaming SQL.

This is something that I always thought would be done by an enterprising third party. However, the vendors themselves are moving in this direction. They are all trying to catch up to Apama in the financial arena, while trying to maintain their excellence in their offering of a general purpose CEP engine.

So far, I have selfishly focused on financial applications. However, there are many CEP applications in other industries, like gaming, transportation, and defense. All of them share the same characteristic: monitor a bunch of real-time actions and warn when a certain condition is true. So, a generic surveillance block would be great. The surveillance block might come with a choice of adapters for the most popular sensors. (Tim Bass can tell me if I have been smoking crack here.)

It may be very difficult to come up with an "application block" that will solve all facets of a domain correctly. Can you come up with a risk management app block that can be used right out of the box? Probably not. However, a thoughtful architecture, combined with lots of market research and lots of extensibility points in the application block, will probably go a long way. In that sense, the vendors should look at how Microsoft architected their Enterprise Application Blocks.






©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Blogging about Layoffs

The New York Times has an interesting article about companies preemptively blogging about layoffs that are about to occur. It seems that some companies want to have the first shot at controlling the message before it is distorted by other bloggers and the general media.

You don't get much information through blogs when layoffs happen on Wall Street or in The City. Sometimes you will read a snippet or two on Here Is The City. But there is not much.



©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Do You Really Need a Commercial CEP System?

Almost every day, we ask ourselves if the decision to use a commercial CEP system was worth it. If you have been following my blog for the past 14 months, you have traced my thought process around the build-vs.-buy decision and my choice of vendor. And almost every day, when doing my 30 minutes on the elliptical, my mind wanders and I start reflecting on my decision, wondering if it was correct to buy a commercial CEP system.

Our system does not do any sophisticated pattern matching yet. Right now, we are doing a lot of aggregations, groupings, rollups, and simple comparison testing in order to generate alerts and real-time dashboard information.

As long-time blog readers know, we were able to get a working prototype of a very early version of our system up and running in a few days with hand-crafted C# code. But, that’s just us. We had a lot of the elements already in place that the commercial CEP systems offer. But, what if we didn’t have that framework to leverage?

Let's say that I recently got laid off from my Wall Street company (a very real situation nowadays), and a few friends and I decided to start a small prop trading shop out of my basement. We are 3 traders and a developer. We need to build ourselves a small algo or high-frequency trading system. What do we need?

We need real-time market data from Reuters or Bloomberg, or if I am interested in something really fast, I might try Activ. We need to persist ticks and results into a database, so we will need both a tick database (something like KDB or Vhayu) and a standard database like SQL Server or MySQL. We need feed handlers that can talk to the engine that maintains our models. We need a GUI so the traders can monitor the market and the orders. We need a FIX engine so that we can send orders out to the markets.

I give my developer explicit orders to get it done as fast as possible. I don’t want him spending time writing a beautifully crafted framework that solves a general problem. I want him to write the simplest thing possible. If we are using Bloomberg, I want him to use the BBAPI to get the market data and put it into the trading engine in the fastest possible way and with the lowest latency. If we have to aggregate something, I want him to take the data, sum it up, and stick it into some hash table that is indexed by symbol.
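In C#, that "simplest thing possible" is about a dozen lines (names invented for illustration): a dictionary keyed by symbol, summed on every execution.

using System.Collections.Generic;

// The "simplest thing possible": a running sum per symbol in a hash table.
// All names are illustrative.
public class FlowAggregator
{
    private readonly Dictionary<string, long> sharesBySymbol =
        new Dictionary<string, long>();

    // Called for every execution coming off the feed.
    public void OnExecution(string symbol, long shares)
    {
        long total;
        sharesBySymbol.TryGetValue(symbol, out total);
        sharesBySymbol[symbol] = total + shares;
    }

    public long TotalFor(string symbol)
    {
        long total;
        return sharesBySymbol.TryGetValue(symbol, out total) ? total : 0;
    }
}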

With all of my requirements in place, I know that it would take my tech guy 6 months to get us up and trading on this new system. It would cost us roughly $100K in development costs. However, our opportunity cost is huge, as we have not been able to put on some of the trades that we would have liked to.

What does it cost to purchase a commercial CEP system? Coral8 charges $15K per core for the primary machine. There are discounts for the HA (high availability) machines. Aleri charges $26K per core. When we met with Streambase in early 2007, they told us that it would take $500K to get us up and running. (I don't know if they have reduced their price.) If I have a 4-CPU dual-core machine (8 cores), then it would cost $120K for Coral8 and $208K for Aleri. Plus, most companies charge a 15-20% annual maintenance fee.

What do I get for that amount?

I get basically a real-time matching engine. I get a high-level language that, if my traders bother to learn and don’t get themselves into trouble with, will let my traders “tweak” their models. I get a bunch of input and output adapters that will let me get information in and out of the engine. (I might have to pay a lot more for a Reuters or Bloomberg adapter, and that is only if my vendor has one.) I get an IDE that, along with the high-level language, I can use to develop the models.

What do I give up if I buy a commercial CEP engine? I give up the right to fix the internals of the engine if I find a bug. I am at the mercy of the vendor’s release cycle. If I find a bug, and the vendor tells me that it will be fixed in 3 months, when the next release comes out, I won’t be happy. 3 months is really 4 months, because initial major releases always have problems with them. I give up the right to add new features when I need them. I am living in fear that, in this fiscal environment, the VC’s might decide to pull all funding out from under the vendor’s feet.

On the plus side, I get to tap into the vendor’s customer support staff, thus freeing my programmer to do other things. I might get to interface with other users of the CEP system so I could learn best practices.

Now that I have my commercial CEP system, what now? I have to get my tech guy to learn a new language (the Streaming SQL variants that are offered by the CEP vendors are easy to learn, but as we have come to discover, if you don't use them properly, you can get yourself into a big mess). If we don't have a specific input adapter (Bloomberg) or output adapter (FIX) that the vendor provides, then my developer has to write it himself. My developer still has to program the models, but now we have to worry about whether the vendor's language is powerful and flexible enough to support the programming of the model.

If we find a blocking bug in the CEP engine, or if we find a language feature that we desperately need, then we are at the mercy of the vendor.

On the other hand, the vendor has a highly tuned multi-threaded engine, so that’s a big chunk of development that we don’t have to do. The CEP engine has cool things that we need, like sliding windows and nested pattern matching.

It’s a big decision to make. If I work at a large company, and I have several different frameworks available to me, and I have 5 full-time developers, then I could see writing everything from scratch. If I was a small startup, and I could not afford the market data fees AND the CEP engine, then I might look to a free solution like Esper. If I was a 5-20 person shop, then my decision is tougher.

What would make the decision easier would be to see if a commercial CEP vendor has a vertical application that solves our needs. A trading system in a box. That's the direction that Apama has taken, and that Mark Palmer is trying to pursue with Streambase. Coral8 will follow that path with the hiring of Colin, and Aleri has its eyes on that direction too.

Friday, November 14, 2008

Reuters Config Files - Where do they go?

If you are using the Reuters RFA 6.x APIs to consume market data, you know that there are three config files that RFA needs to find. These configuration files are

  • RFA.CFG
  • appendix_a
  • enumtype.def

Usually, if you are running a console or WinForms application, you put these config files in the same directory as your executable.

However, what if you are running a Windows Service that uses RFA?

After some experimentation, we found that you need to store the files in these places:

  • Windows 32 - Put the files in c:\WINNT\system32
  • Windows 64 - Put the files in c:\WINNT\SysWow64

A service or an executable that uses RFA (the Marketfeed API, not the OMM API) uses 32-bit DLLs. This is because Reuters does not yet have 64-bit DLLs for RFA. Therefore, you need to use CorFlags.exe on a 64-bit Windows 2003 server in order to get your application to run.

  • corflags mydll /32BIT+

Thanks to the wonderful Procmon utility from SysInternals for helping us out with this issue.


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Tuesday, November 11, 2008

Random Notes

Two new blogs:

The News Before The News

Rodrick Brown

Rodrick is one of the few people who mention their ex-employers by name (Lehman and Bank of America), and he describes the experience of working in those places as an Infrastructure Architect for Equities.

Here is a screencast that contrasts Java and K. I have resolved that, before I fade away, I will teach myself K. (The creator of the screencast went to Stuyvesant High School. Go Stuy!)

By the way, I hear that Niall Dalton has left Kx Systems. Good luck to Niall, our favorite stunt programmer.



©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Monday, November 10, 2008

My Five Minutes of Musical Fame

While working out on the elliptical the other day, I was thinking about my musical career and my former bandmates. I never became a professional musician, but I have certainly had some brushes with great players who went on to become pros.

I first learned to play the drum set as a sophomore in high school. My very first band was based in a town called Forest Hills Gardens in Queens, New York. We were best friends with another band, and all of us used to jam with each other. The lead guitarist of the other band was a guy by the name of Chieli Minucci (http://www.chielimusic.com/). Chieli went on to become a world-renowned guitarist in smooth jazz circles as the leader of a band called SFX.

As a junior, I played with a progressive rock group from Long Island called Heresy. We played some pretty sophisticated covers of songs by ELP (we played the entire Karn Evil 9 suite), King Crimson, Gentle Giant, and Jethro Tull (the entire Thick as a Brick). The bassist, Tony Garone (http://www.garone.net/tony), has gone on to play with Jethro Tull at some of the Tull conventions.

As a senior in high school, I played with an experimental free-jazz-rock-space band called Third Sun. Although the sax/flute/synth player, Douglass Walker, went on to become one of the early pioneers of the American Space Rock scene, the player who had the most success was Pablo "Coca" Calogero (http://www.myspace.com/pablocalogero). He went on to play sax with people like Tito Puente, Eddie Palmieri, and other famous figures in the Latin Jazz scene. He also had a part in the movie The Mambo Kings.

When I went to college, within the first few weeks, I hooked up with a free jazz group. The trumpet player was Richard Edson, who went on to fame as the first drummer of the noise group Sonic Youth. Richard later became an actor, and had significant roles in movies like Good Morning Vietnam and Do the Right Thing. Most recently, he has appeared in a series of commercials for the Travelers Insurance Company as the human embodiment of risk.

After my sophomore year of college, I pretty much gave up on playing with small groups. I dove into the world of classical music, and from then on, the only groups I cared to play with were orchestras and wind ensembles, in addition to playing solo percussion and marimba. But, by association, I have had my five minutes of fame.



©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Sunday, November 09, 2008

Welcome to Jeff

Jeff joins the Complex Event Processing team next week. He was recently with Oracle, and he will help us take our system across different asset classes and roll out CEP on a global basis.


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Monday, November 03, 2008

Microsoft Oslo SDK Setup Failure

It looks like you need SQL Server 2008 installed in order to use Oslo. Microsoft does not state this anywhere, nor does its installation program warn you of this. Here is my experience in trying to install Oslo on my laptop.


1) I downloaded the Oslo SDK CTP from the Microsoft site.

2) I extracted the files, and ran the setup. The setup completed without any warnings or error messages. So far so good.

3) I have Windows Vista SP1 with both SQL Server 2005 and SQL Server Express 2005 installed. I opened up SQL Server Management Studio and searched both servers for any trace of the Oslo repositories. No luck.

4) In the directory C:\Users\Marc\AppData\Local\Temp, I found a file called Oslo_Repository_Setup.log. I opened this file in Notepad and found the following lines:

Property(C): ExecuteSilentCmd = "C:\Program Files\Microsoft Repository\CreateRepository.exe" /v+
Property(C): WIXUI_EXITDIALOGOPTIONALTEXT = Warning: repository database creation failed. Please see %temp%\RepositorySetup.log for more details.

However, at the very end of the file, I found the following lines:

MSI (c) (B0:48) [06:51:33:985]: Product: Microsoft Codename "Oslo" Repository -- Installation completed successfully.

MSI (c) (B0:48) [06:51:33:987]: Windows Installer installed the product. Product Name: Microsoft Codename "Oslo" Repository. Product Version: 3.0.1342.0. Product Language: 1033. Installation success or error status: 0.

So, it looks like the installer considered the installation to be a success, even though the repository could not be created.

5) In the file RepositorySetup.log, I found the following lines:

message: Creating repository database ...

error: Cannot assign a default value to a local variable.

Must declare the scalar variable "@login".

[11/3/2008 - 6:51:24]Completed execution of: "C:\Program Files\Microsoft Repository\CreateRepository.exe" /v+


6) Other users have reported this as well. Microsoft says that this is not a bug. It is "By Design".

7) At this link, we find a comment by Carlos Gomez of Microsoft:

We do not support Sql Server 2005 at all, because some of the repository features depend on Sql Server Katmai.

In other words, you need to have SQL Server 2008 installed. Well, my company is still on SS 2005, so I guess that means that Oslo will be just for my personal experimentation for the foreseeable future.


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Sunday, November 02, 2008

Colin Clark - The New Head of FinServ at Coral8

Colin told me that I can post the good news .... he just accepted the job of Executive Vice President of Financial Services at Coral8, reporting directly to the CEO.

This is a great thing for Coral8.

As I have written here before, I feel that financial services domain knowledge was not one of Coral8's strong points, and I think that they would be the first to admit this. I personally think that it is difficult to target Wall Street firms from the dry air of Silicon Valley. I would venture to say that not one of the people who we interact with at Coral8 comes from a financial services background, and although they tried very hard, they were never able to give us a full-time support person in New York, let alone someone who knew anything about trading apps.

Now, in the space of a few weeks, Coral8 has bagged Mike DiStefano of Gemstone and Colin Clark, the ex-CEO of Kaskad ... and just when I thought that Coral8 was going to de-emphasize its efforts in financial services (not that I would have blamed them!).

Colin will be building up a financial services organization in the US and in continental Europe and the UK. Coral8 has no current sales organization on the right side of the pond, and when we go global with our CEP system, it is nice to have local support from the vendor.

What is great for us is that Colin actually cares about what we, as a capital markets firm, do with Coral8. And, let me tell you ... we have a laundry list for Colin that will keep him busy for many months. We have requests in the areas of real-time OLAP, entitled subscriptions, persistence, object cache connectivity, performance improvements with ad-hoc queries in the Coral8 public windows, better .NET support (which we have been promised already), better profiling and performance monitoring, better documentation, and more.

And, did I mention that Colin is a pilot? (My goal is to have our vendors stocked with pilots and percussionists.)

Welcome to Coral8, Colin. I hope you are ready for us, 'cause we are coming at you like one of those planes you see in the Reno Air Races.


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Thursday, October 30, 2008

CEP - A Legend in its own Mind?

In the wake of the current financial crisis, several enterprising journalists have been trying to link the use of CEP, EDA, and SOA with the prevention of further financial woes. I have been sitting back and chuckling at many of these attempts to equate a three-letter acronym with financial salvation, especially ones written by people who don't actually work in finance and have never set foot on a trading floor. However, I cannot remain silent anymore.

I am reading an article that just appeared in Wall Street & Technology magazine titled Wall Street Firms Using CEP to Measure and Manage Risk. Directly under the title of the article is the proclamation:

New complex event processing applications promise to help firms get a better handle on their risk exposure, but can CEP erase Wall Street’s risk management woes?

This is an example of the sensationalistic headlines that have been crossing the various blogs and trade magazines in the past few weeks. All of a sudden, CEP has become a three-letter word for Financial Nirvana.

People .... I have news for you ....

CEP systems are simple tools. They take streams of data and produce some output whenever something in the streams fits a certain condition. That's all it is! CEP systems do not do predictive analysis. They won't tell you when you will be losing money in the future.

They are like compilers. Saying that CEP can help with risk management is the same as saying that a C++ compiler can help you with risk management. CEP products and C++ compilers are merely tools. You are supposed to provide the logic and the interpretive abilities.
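To make that concrete: stripped of windowing, adapters, and scale, the conceptual core of a CEP rule is just the following (a deliberately trivial C# sketch, not any vendor's engine). The part that matters, the predicate, is supplied by you:

using System;
using System.Collections.Generic;

// The conceptual core of CEP, deliberately trivialized: events stream in,
// a user-supplied condition is tested, output is emitted on a match.
// The risk intelligence lives entirely in the predicate, not the plumbing.
class ExposureMonitor
{
    static void Main()
    {
        var exposures = new List<KeyValuePair<string, decimal>>
        {
            new KeyValuePair<string, decimal>("Counterparty A", 120000000m),
            new KeyValuePair<string, decimal>("Counterparty B", 40000000m),
        };

        // The "rule" a risk manager would still have to write and interpret.
        Predicate<decimal> tooBig = exposure => exposure > 50000000m;

        foreach (var e in exposures)
            if (tooBig(e.Value))
                Console.WriteLine("ALERT: exposure to " + e.Key + " is " + e.Value);
    }
}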

All banks have risk management systems, and most (if not all) have NOT been written using CEP products. Most of them are old, legacy systems written in C++. And, most of them do the same things that CEP products do. They take real-time streams of market data, positions, trades, and P&L, and they crunch them together to produce some useful output. And believe it or not, they work well despite the fact that they are not using commodity CEP technology!

And, many risk systems are run as end-of-day processes. They are not real-time. They produce management reports which are read every morning. It is not the fault of the risk management system or the underlying technology if management chooses to ignore a risk report, or chooses to put on an exotic trade whose risk can't be measured, or chooses to go all-in with subprime mortgages.

Risk management systems were here long before CEP became a buzzword and will remain long after all of the commodity CEP products have vanished.

I can imagine that, with the journalistic feeding frenzy associating CEP and risk management, smart marketeers like Terry Cunningham, Mark Palmer, and Don DeLoach are grinning from ear to ear. Not only grinning, but also doing their bit to fan the flames, as any smart entrepreneur would do.

Let's look at the article a bit.

But CEP vendors say their software can give risk managers a better view of such counterparty risk. "No software can replace [good] judgment," says Jeff Wooten, VP of Aleri. "What [CEP] software can do is give you better information with which to make those judgments and a better understanding of where you stand."

Ugh. Why would a system written with Aleri give you a better view of your risk than a custom-coded system? Will Aleri help you write more sophisticated detection rules?

Undoubtedly, Aleri will enable you to write a new risk management system much quicker than if you were to code one from scratch. But, I don't see that the CEP system will give you "better information". (To be fair, since Aleri has a real-time OLAP product, you might get better information by using the Aleri-specific visualization tool. Maybe that is what Jeff was implying.)

"Our software can't help you predict what's going to happen with your counterparty; it can't help you predict that Lehman will declare bankruptcy," Wooten adds. "But it can help you know what your exposure to Lehman is."

Knowing your exposure to Lehman is conceptually a very simple task that can be done without the help of a CEP system. However, what if your exposure is tied up in complex derivatives? How would a CEP system help you there? I am pretty sure that all of the people who lost money in Lehman bonds knew precisely what their exposure was, without the help of a CEP system.

"It's predictive; it's [based on] probability and in some cases the CEP engine will grab something it doesn't need," Greene acknowledges. "But when you look at more-complex instruments that can take weeks or months to settle because of issues on the back end, the CEP engine that can help them automatically grab information ahead of time behind the scenes speeds that up."

Hmm ..... Can anyone decipher Spence's words for me? I think that Spence is kinda hinting about what I mentioned above, with the complexities involved in figuring out your P&L based on very complex structured products. But I would like to know exactly how Tibco Business Events assists in figuring out this exposure, and why it would be harder to do if you were to use some custom code or Excel.

Despite CEP vendors' promises, though, there are those who feel the value of CEP technology to risk management is finite, pointing to limitations of the technology itself and to the fact that risk management involves more than looking at numbers.

Bingo! And, directly after this statement, Tim Bass weighs in with some of his very correct opinions.

Another dissenting voice is Miles Kumaresan, head of quantitative trading at proprietary trading firm TransMarket Group. "The problem we have right now is the credit market, and that has nothing to do with complex risk models," he told WS&T in late September. "To do risk assessment you don't need CEP. It's much more important to actually use the risk numbers that are already available."

Double Bingo!

CEP vendors sell a tool. This tool enables you to take real-time streams of data and correlate them in various ways. They are very useful tools. But, if you have a working risk management system, you don't need to go out and start rearchitecting your systems right away. The current crisis is bigger than any risk management system. No risk management system would have stopped SAC and Greenlight Capital from losing billions of dollars on the Volkswagen short squeeze. There is a herd mentality on Wall Street, and CEP/SOA/EDA was not going to stop SAC from accumulating this particular short position.

Where would a CEP system be useful? I might use CEP if I would build a new risk system that aggregates output from other legacy risk systems in order to present an enterprise-wide view of risk. I would use CEP to build a brand new risk management system, but only if I could not find one from a vendor that fit my needs. (And, if I might allow myself to jump on a fashionable buzzword, I might eventually find risk management applications in "the cloud", maybe as a service offered in Microsoft Azure?)

CEP, SOA, EDA are concepts and there are tools that implement these concepts. Don't ever mistake them for something that will give you immediate safety from all of the wolves out there.


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Saturday, October 25, 2008

New Coral8 Users group on LinkedIn

There is a new "Coral8 Users" discussion group on LinkedIn. Please consider joining if you are a user of Coral8 or are interested in learning more about Coral8.

Please note that this group is not being sponsored by Coral8 in any way. This is a user-sponsored group. It is a way for Coral8 users to help each other and to discuss Coral8 without involving the staff of Coral8 at all.



©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Forays into C++

We have a C#/.NET-based Reuters market data adapter that we wrote ourselves (well, actually, I wrote it). This is an out-of-process Coral8 adapter that reads real-time ticks from RMDS, turns them into Coral8 tuples, and feeds them into our Coral8 engine. The adapter has two caches in it, and since market data is last-reliable, we can publish the last-reliable ticks from one of the caches at specific intervals.

When we hooked the market data adapter up to our Coral8 engine, we started seeing an immediate backup in the pending message queue. We were sending Coral8 about 3000 ticks per second. In addition, our Coral8 engine was processing our order flow, which could hit a max of another 3000 messages per second.

My first thought was that it was taking too long for Coral8 to deserialize the market data tuple, so I wanted to transform our market data adapter from an out-of-process adapter to an in-process adapter. Unfortunately, in-process adapters have to be written as a vanilla, Win32-based C/C++ DLL.

I grew up on C++. However, I have not touched a lick of C++ since diving into .NET in 2001. I was scared. I was frightened. However, I know that my team needs some experience in writing in-process adapters for Coral8, and since my team consists of very strong .NET and Java people, I decided to bite the bullet and take one for the team.

Man, oh man! I can't believe how much I had forgotten! Strange symbols like '*' and '->' were coming from my fingers. Compilation errors sprang up everywhere. There was no Resharper to help me along. Linker errors to strange obfuscated functions that I swear I never wrote started appearing in the error window.

Perhaps I need to use a little bit of extern "C" here? Maybe there? Maybe everywhere!

Where was my .NET Thread class? Where was System.Timer? How do I do events? There is a new kind of pain that you have to go through in order to get callbacks to work within templated C++ classes. It took me hours to get simple callbacks to work.

Finally, after two days of work, I got the market data adapter ported over to Coral8. To start with, let's try reading the ticks from a flat data file instead of from Reuters. Whew, it works. I see the ticks appearing in the Coral8 stream viewer. Now, let's try Reuters. After a bit of tweaking and some crashes on the destructors, ticks are finally coming in.

Then, I got an email from our Coral8 developer telling me that the reason the market data feed was slow in Coral8 was one of those mysterious Coral8 queries that he wrote that clogged the system! So, my in-process adapter was not needed anymore. Market data was now zooming through our Coral8 engine.

Nevertheless, it was a fascinating experience to return to my C++ roots after a 7-year hiatus in the .NET world. Never again!




©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Coral8 and Performance

From the head of our CEP engine development team (used with his permission):

I'm pleasantly surprised at how fast c8 is when one gets it right. (On the other hand, one misstep and performance goes over the cliff.)


This is one of the main problems with the variants of Streaming SQL that you find in many of the CEP engines. It is not at all transparent what goes on under the hood when you are writing complex queries in Streaming SQL.

Unlike Microsoft, where you find very good profiling tools in SQL Server, the CEP vendors do not provide the necessary tools that will enable a developer to isolate bottlenecks. This has been an issue with Coral8, which they recognize and hope to remedy in the future.

Every time I've cursed out Coral8 for what I thought was lousy performance, it has turned out that we were the ones who wrote a query that clogged the engine. And, when you clog up Coral8, it can bring an entire machine down very quickly as the pending message count builds up.

The important thing about choosing a CEP vendor is the level of technical support, and despite some turnover in Coral8's tech support staff (goodbye to the excellent Trahn), we have been able to have access to their chief architect and to the head of development when we have really gotten ourselves into trouble. Coral8 also recently hired a New York-based support person who used to be an architect with Gemstone. So, even though it will take this guy some time to get up to speed with Coral8, we are glad to have a local person to help us when we need assistance.

Coral8 has just released version 5.5, which will be a major help to us in terms of real-time OLAP and entitled subscriptions. It is a major release that will force us to refactor some of our code. But, it is good to see that Coral8 is making it easier for us to implement our real-time dashboards where every user can possibly be entitled to see a different view of data. I feel that support for real-time OLAP is going to be a major selling point for CEP systems, as it is *really hard* to implement it totally on your own.


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

CEP Updates from the Front

It feels like forever since I last found the time to blog. There are millions of things to write about and so little time.

We finally rolled out our CEP system to the heads of the Equities Trading department. It is sitting on the desks of the Head of Equities Trading and the Head of North American Trading, and will start to be rolled out to the individual traders in short order.

The CEP system gave us a nice "victory" last week. On one of the very big volume mornings, our system was showing that a certain customer type was not buying any financial stocks. The Head of Trading came to us and told us that our system was showing wrong data. Then, he started to call around to his individual traders, and what'dya know ... this customer was not buying financials.

So, it seems like our system is already starting to give the department heads the insight into trading activity that they haven't had before. And, it is giving them the information in a totally new visual paradigm that is so far away from the simple flashing grids that most traders are used to.

We have just started to integrate market data into our system ... but that's the subject of my next blog posting.

The big steps for 2009 will be to start to take our system global. This means having region-specific instances of CEP, and having each region feed into a central CEP hub in order to let people do cross-region correlations. This will bring us new challenges in terms of data quality, as we will have to deal with unifying symbologies, currencies, compliance issues, and more.

... Of course, this is assuming that we all have jobs next year!





©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Thursday, October 09, 2008

Orange Widgets

A recent posting from Jack Rusher of Aleri reminded me about Orange. I had looked at Orange a few months ago very briefly, but it looks like, over this past summer, the Orange Project ramped things up and put out the first official version of their system.

This looks a lot like what I want to do with a Yahoo Pipes-like system. The guys from the Microsoft Oslo team should also read this and see if they can get any additional inspiration. (It's nice to see that Aleri is also looking at this stuff ....)

I will be reading further to see what it would take to come up with an input adapter for Orange that can read a Coral8 tuple. Hopefully, they already have adapters that can read from SQL Server and KDB (yeah, right!)

And, what about Orange's capability to analyze real-time data? Can we put order flow through Orange? Do we have to aggregate order flow and feed Orange at one-minute intervals?

Visual tools for data mining are becoming more and more widely used. There are tools like Tibco Spotfire and Tableau that are used a lot in Pharma and Financial Services. (There are some new ones on the horizon that I cannot talk about that make these tools seem like MS-DOS prompts.) But, Spotfire and Tableau really do not handle real-time flows of data.

The elusive goal is to provide great visualization for true, real-time OLAP. On my CEP team, we had to write a custom real-time OLAP GUI ... how long before these kinds of tools become commoditized?



©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Wednesday, October 08, 2008

New Aite Report Slams Microsoft's Capital Markets Partner Program

A blog reader sent me this link to a new report by The Aite Group.

Below is a copy of the report summary from the Aite website. I will comment further in another blog posting.



Of vendors engaged with Microsoft's Capital Markets Partner Program, 55% rate Microsoft's overall satisfaction with the relationship as mediocre or poor.

Boston, MA, October 6, 2008 – A new report from Aite Group, LLC evaluates the effectiveness of Microsoft's Partner Program as a capital markets vertical strategy within a horizontal company. The report outlines what vendors engaged in a partnership with Microsoft can expect, best practices for other horizontal vendors looking to focus on capital markets, and ways in which vendors and capital markets firms can maximize their technology investment in Microsoft.



The report is based on interviews conducted with 21 partners in the Microsoft Partner Program who all offer capital markets solutions, as well as current and former Microsoft employees, customers, and non-partners working with Microsoft Capital Markets. Aite Group found that Microsoft's strategy of developing its vertical partnership program may have hurt its reputation, as 55% of partners in this program rate their overall satisfaction with Microsoft as mediocre or poor. Though Microsoft is criticized for its lack of knowledge of the capital markets vertical, it is viewed favorably for its architecture and engineering, as well as for its technical support.

"While Microsoft's horizontal rivals, including IBM and Oracle, were busy building capital markets offerings through acquisitions, Microsoft maintained a partner program whereby it supplied the underlying platform upon which other software providers could build vertically specific solutions," says Adam Honoré, senior analyst with Aite Group and author of this report. "As a vertical strategy, Microsoft is not the feared giant it is perceived to be in many horizontal venues. Instead, Microsoft has much work to do before it is perceived as anything other than a mediocre performer in its vertical go-to-market strategy."





©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Tuesday, October 07, 2008

Microsoft Oslo and our App

For some reason, Microsoft seems to think that I am an "influencer" on Wall Street, and that plying me with large steaks and bottles of wine will make me happy (they are certainly correct in the second assumption).

Last night, I had a very nice dinner with Robert Wahbe and Steven Martin of Microsoft. They are senior leaders in the Connected Systems Division, the group that is overseeing the development of the mysterious Microsoft Oslo product/framework. We had a happy group that included the famous DonXML, Ambrose from Infragistics, Bill Zack, and a charming lady from Waggener Edstrom whose job it was to make sure that Robert and Steven did not disclose too many of Microsoft's secrets (although I ended up telling them a few little ditties about Microsoft that they were not aware of!).

I have been sworn to secrecy, so I am just going to regurgitate the few facts about Oslo that have already leaked out through various blogs. Oslo contains three things:

1) A new language that enables you to create your own Domain-Specific Languages (DSL).

2) A visual modeling tool.

3) A repository for models that you build.


What would we do with Oslo in our CEP system?

We would like our traders to be able to eventually build their own queries/models, and "inject" them into our CEP system. We do not want the traders coding in Coral8's CCL or any other streaming SQL language for that matter. And, we want the traders to play in our "sandbox" without hurting themselves or the other users of the system.

If we could create a high-level DSL that, through the use of some sort of "adapter", was translated into Streaming SQL, then we could give that to the traders. A visual modeling tool would allow them to easily construct models (remember how enthusiastic I was about Yahoo Pipes a few months ago?).
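As a toy illustration of the "adapter" idea (the rule shape and the generated query text are both invented, and real CCL would differ), the translation could start out as little more than constrained code generation:

using System;

// Toy illustration of a trader-facing DSL compiled down to streaming SQL.
// The rule shape and the generated query text are both invented; real
// Coral8 CCL would look different.
public class TraderRule
{
    public string Stream;     // e.g. "OrderFlow"
    public string Measure;    // e.g. "SUM(Shares)"
    public string GroupBy;    // e.g. "Sector"
    public string AlertWhen;  // e.g. "SUM(Shares) > 1000000"

    public string ToStreamingSql()
    {
        // The trader never sees this string; the GUI builds the rule,
        // and the "adapter" injects the generated query into the engine.
        return string.Format(
            "SELECT {0}, {1} FROM {2} GROUP BY {0} HAVING {3};",
            GroupBy, Measure, Stream, AlertWhen);
    }
}

class Demo
{
    static void Main()
    {
        var rule = new TraderRule
        {
            Stream = "OrderFlow",
            Measure = "SUM(Shares)",
            GroupBy = "Sector",
            AlertWhen = "SUM(Shares) > 1000000"
        };
        Console.WriteLine(rule.ToStreamingSql());
    }
}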

Let's see what Microsoft comes up with, and how soon it will be until Oslo is ready to use on a trading floor.



©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Sunday, October 05, 2008

Benchmarking .NET-based Transaction Engines (and the LSE)

Although somewhat dated (the infrastructure is from 2004, which can be considered light-years in the past), this is a useful read:

Benchmarking a High Performance Real-Time Transaction Engine Design

(Go to the bottom-left of the page, and click on the link that says "View or download")

Even more interesting is the slide deck that accompanied the presentation. Slides 24 to 32 give some insight into the .NET-based architecture of the London Stock Exchange.








©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Has LinkedIn Jumped the Shark?

There are a few annoying tendencies that I have been noticing about LinkedIn for the past several months:

1) Gratuitous recommendations. You open up your Inbox and find that some unremarkable colleague you worked with five years ago is asking you for a recommendation, and promising to give you a stellar recommendation in return. Usually this ex-colleague is looking to change jobs and wants to load up on praise. You then have to spend the next 15 minutes remembering what this person actually did on your team, grasping at straws in order to say something positive about him. The verbiage of LinkedIn recommendations is always the same. To a potential employer, a LinkedIn recommendation is pure noise.

2) Unsolicited ads and spam on LinkedIn Groups. It seems that new groups pop up every day, taking a simple domain and dicing and slicing it 20 ways. There is a group for Wall Street, a group for Wall Streeters who are left-handed, a group for Wall Streeters who take the subway more than 5 stops, etc. It seems that the moderators of these groups will let anyone who has a pulse into the group.

I joined a group about Wall Street the other day, and the first discussion on the group was from a real estate agent from Prudential Realtors, looking to sell condos and co-ops in New York. Recruiters and outsourcers troll these groups, posting ads for opportunities and peddling their wares, thus avoiding having to pay any fees to LinkedIn for job postings. People like these cheapen the groups.

3) There are a large number of recruiters who send you LinkedIn invitations. Why in the world would I let a recruiter see my contacts? Some of these contacts are fairly high up in the food chain. I don't want a recruiter contacting them. I almost never accept invites from recruiters. There is an implication that, if you have a LinkedIn contact, you know that person and like them enough to let them participate in your network.


LinkedIn is a truly useful tool. However, I think that it has "jumped the shark". LinkedIn needs to give the user additional options to control the increasing amount of spam that is invading the network.


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Sunday, September 28, 2008

Always Be Coding

Alec Baldwin's most memorable bit of acting comes in the great movie, Glengarry Glen Ross. In giving a motivational speech to a crew of "salesmen", Baldwin invokes the basic tenet of sales, "ABC".

A - Always
B - Be
C - Closing

My adaptation of this is:

A - Always
B - Be
C - Coding


Even though I have managed teams from 1 to 30 people, I never gave up coding. People are amazed when I tell them that I still code.

The ability to write code is a skill like plumbing or being an electrician .... someone will always need coding done. No matter what industry you are in ... finance, healthcare, media, entertainment, e-commerce .... someone always needs you to write a new system or to enhance an old one.

No matter where I go with management, there are reasons why I still like to code:

1) It is creative and relaxing. There is no feeling like being "in the zone" while coding up a new idea. It is left-brain activity which calms the right side.

2) You produce an objective piece of work. At year's end, you can always print out your reams of source code and show others that you actually accomplished something. You have the feeling that you have created something concrete, like a piece of art.

3) You need to have something to fall back on if things ever get rough. This is important for those of us who have one kid in college and another two years away from entering college.


The other day, I was standing by the large windows of our trading floor, talking with the head of our Stat Arb department, looking at all of the sailboats going up and down the Hudson River. The head of Stat Arb still codes as well. So does the head of our Derivatives Analytics department. These are both Managing Directors and they are still hacking away.

The head of Stat Arb and I were talking about the recent financial crisis and the ongoing wave of mergers. We were both saying that, if our company ever gets acquired, or if we acquired another company which would make our positions redundant, we could still go off and make a living as coders. A great Java coder (with a bit of KDB knowledge) like Mr. Stat Arb, who has deep financial domain knowledge, might be able to make $1500 a day ... I am not sure if consulting rates have collapsed recently, but I assume that some financial institution would be able to bring him on for a short-term contract. (Maybe someone can let me know what the consulting market is doing these days.) I still see plenty of jobs out there for C# hackers in all sorts of industries ... not only UI jobs, but C# server-side jobs as well.

It rained this entire weekend, as Tropical Storm Kyle passed off the East Coast of the United States. My wife was sleeping, as she was recovering from jet lag, and my daughter was busy doing homework and studying for various tests. So, I decided to do something very foolish, and try to teach myself some WPF (Windows Presentation Foundation).

Our Complex Event Processing system uses WPF for the GUI, and so far, I have left my UI developers alone while they built it. However, I feel that I need to be able to understand their XAML code at some level. So, I opened Visual Studio 2008 and started coding up some simple screens, experimenting mostly with XAML.
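
For the record, this is about the level of screen I was hand-typing ... a minimal Window, nothing fancier. A sketch, not production XAML, and the Click handler would live in the code-behind:

<Window x:Class="ScratchPad.Window1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Hello WPF" Width="300" Height="200">
    <!-- Layout comes from the panel, not from pixel coordinates; this is
         the big mental shift when coming from WinForms -->
    <StackPanel Margin="10">
        <TextBlock Text="Symbol:" />
        <TextBox x:Name="SymbolBox" />
        <Button Content="Go" Margin="0,10,0,0" Click="OnGoClicked" />
    </StackPanel>
</Window>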

I felt a bit silly in trying to learn a new UI paradigm as my management responsibilities increase at work. However, as I started mucking around with XAML, I started to feel myself get "into the zone" again.

Visual Studio 2008 (without SP1) seems to be a pretty crappy environment to develop WPF apps in. I don't use any of the other tools, like Expression Blend. This is a really vanilla WPF development environment. (I hear that SP1 is supposed to enhance the WPF design experience, but I am a bit afraid to install it after reading some blogs that detailed horror stories that occurred post-installation.)

But, give me a few weeks to explore more WPF. I want to get to the point where I can make some simple changes and do a bit of debugging of our GUI if I have to.

Always Be Coding.



©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Sunday, September 21, 2008

Colin is Building a CEP App

Colin Clark is starting to blog about OMICRON 5000, a CEP-based order router/liquidity finder that he wants to build. I am not sure if Colin is going to just sketch the system out on his blog, or if he is going to get down and dirty and start coding this thing. But I will follow his upcoming exploits very closely, as Colin implies that he will keep us informed during every step of the development process.

I don't know what technology platform Colin is going to choose ... Windows or Linux, C# or C++ or Java. If Colin is very public about this new effort, then I might expect some vendors to throw him some complimentary software, as it would be a tremendous marketing coup for the CEP vendor who eventually is chosen.

Colin's evaluation will take place a year after we did our initial evaluation, and I am interested to see how he ends up choosing the CEP engine. Colin is a company of one. *IF* Colin can get his hands on some CEP software, he will not have the same luxury that we did --- 4 months devoted to research --- to make a prolonged evaluation.

Let's see what a company of one can do to get bootstrapped into the world of CEP.

How will Colin get his hands on the CEP software to evaluate? Coral8 is free to download, and it is very easy to get started with their developer version. You can download Aleri, but from what I remember, it comes with a 30-day license. Progress Apama would not let me download their software without first sitting through a dog-and-pony show. Streambase gave us their software, so I don't know anything about their standard evaluation process. Esper is totally free (at least, their non-HA version is). I wonder if Paul Vincent will throw Colin a copy of Business Events to play with.

Once Colin chooses a CEP engine, how much will he have to spend to get a production license for the software? Esper is free. The others require a minimum outlay of $60,000, not including yearly maintenance fees. You can play with your downloaded version of Coral8 all you want, but you cannot deploy it into a production app without purchasing a license.

Colin says that OMICRON needs a database. I assume that MySQL is still free, even after the Sun purchase. You need to find a CEP engine that has MySQL adapters.

OMICRON needs a FIX engine. The only free one that I know of is QuickFix. Will Colin build an in-process or out-of-process FIX adapter for his CEP engine?

OMICRON needs market data. OpenTick is free to use, and all you need to do is pay the exchange fees. Worst comes to worst, you can write a service that scrapes Yahoo Finance, but I am sure that will not satisfy Colin (although it is useful for early-stage development).
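
(A scraper really is just a few lines of C#. Here is a quick sketch against the informal CSV quote URL that Yahoo exposes; the URL and field codes are from memory, so verify them before building on this:)

using System;
using System.Net;

class YahooQuoteScraper
{
    // f=sl1v requests symbol, last trade price, and volume (field codes assumed from memory)
    public static string[] GetQuote(string symbol)
    {
        string url = "http://download.finance.yahoo.com/d/quotes.csv?s=" + symbol + "&f=sl1v";
        using (WebClient client = new WebClient())
        {
            string csv = client.DownloadString(url);
            return csv.Trim().Split(',');   // naive split; quoted fields would need real CSV parsing
        }
    }

    static void Main()
    {
        string[] fields = GetQuote("MSFT");
        Console.WriteLine("{0}: last={1}, volume={2}", fields[0], fields[1], fields[2]);
    }
}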

Finally, Colin needs a GUI. If Colin uses C#/.NET, there are plenty of controls that come with Visual Studio, and there are tons of others that you can find on sites like CodeProject. I am assuming that all OMICRON needs is a simple grid, with perhaps some real-time updating of the cells.
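
For what it's worth, the real-time grid is mostly a data-binding exercise in WPF. A rough sketch of the standard pattern (the class and property names are invented for illustration): each row raises PropertyChanged when a cell changes, and an ObservableCollection bound to the grid's ItemsSource takes care of row adds and deletes.

using System.Collections.ObjectModel;
using System.ComponentModel;

// One row of the blotter; the grid repaints a cell when PropertyChanged fires
public class QuoteRow : INotifyPropertyChanged
{
    private double m_last;

    public string Symbol { get; set; }

    public double Last
    {
        get { return m_last; }
        set { m_last = value; RaisePropertyChanged("Last"); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void RaisePropertyChanged(string name)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(name));
    }
}

// The collection notifies the grid of row inserts and deletes; in XAML,
// you would bind it with something like ItemsSource="{Binding Rows}"
public class QuoteBlotter
{
    public ObservableCollection<QuoteRow> Rows { get; private set; }

    public QuoteBlotter()
    {
        Rows = new ObservableCollection<QuoteRow>();
    }
}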

It seems that a single developer can get away with totally free, Open Source software if he were to choose Esper. There might also be CEP engines from universities that you can use.
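
To make the free-stack idea concrete, here is roughly what a first continuous query looks like in NEsper, the C# port of Esper. I am writing the API calls and namespaces from memory, and they differ between versions, so treat this purely as a sketch:

using com.espertech.esper.client;   // older NEsper builds used the net.esper.client namespace

public class StockTick
{
    public string Symbol { get; set; }
    public double Price { get; set; }
}

public class EsperSketch
{
    public static void Main()
    {
        Configuration config = new Configuration();
        config.AddEventType("StockTick", typeof(StockTick));

        EPServiceProvider provider = EPServiceProviderManager.GetDefaultProvider(config);

        // Rolling 30-second average price, grouped by symbol
        EPStatement stmt = provider.EPAdministrator.CreateEPL(
            "select Symbol, avg(Price) as AvgPrice from StockTick.win:time(30 sec) group by Symbol");

        stmt.Events += delegate(object sender, UpdateEventArgs e)
        {
            foreach (EventBean bean in e.NewEvents)
                System.Console.WriteLine("{0} avg={1}", bean.Get("Symbol"), bean.Get("AvgPrice"));
        };

        provider.EPRuntime.SendEvent(new StockTick { Symbol = "MSFT", Price = 27.50 });
    }
}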

I will eagerly follow Colin's blog and kibitz on his every move....


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

The SQLDependency Object and Entitlements

Our CEP Servers have to run 24x6. When our server starts up, it reads some tables in our SQL Server 2005 database that contain information about users and the alerts that they are entitled to see. We don't want most of our users to be able to see alerts that occur because of order messages, but we do want our users to see alerts that occur from execution reports.

If we had to query the database for the entitlements on every single alert that flows through our system, we would take a tremendous hit in performance. Therefore, we cache the entitlements and their result sets in memory.
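
To give a flavor of what I mean, here is a bare-bones sketch of such a cache. The class, method, and column names here are invented for illustration, and this is not our production code:

using System.Collections.Generic;

// Hypothetical entitlements cache: hit the database once per user on a miss,
// then answer every subsequent entitlement check from memory.
internal class EntitlementCache
{
    private readonly object m_lock = new object();
    private readonly Dictionary<string, HashSet<string>> m_alertsByUser =
        new Dictionary<string, HashSet<string>>();

    public bool IsEntitled(string userId, string alertType)
    {
        lock (m_lock)
        {
            HashSet<string> alerts;
            if (!m_alertsByUser.TryGetValue(userId, out alerts))
            {
                // One query per user on a cache miss, instead of one query per alert
                alerts = LoadEntitledAlertsFromDatabase(userId);
                m_alertsByUser[userId] = alerts;
            }
            return alerts.Contains(alertType);
        }
    }

    // Called when we detect that the entitlement tables have changed
    // (see the EntitlementTableChangedWatcher later in this post)
    public void Clear()
    {
        lock (m_lock)
        {
            m_alertsByUser.Clear();
        }
    }

    private HashSet<string> LoadEntitledAlertsFromDatabase(string userId)
    {
        // e.g. SELECT AlertType FROM dbo.Subscription WHERE UserId = @UserId (omitted)
        return new HashSet<string>();
    }
}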

We want to be able to change our entitlements on-the-fly, while the server is running. We want to add and delete users, change the method of notification for a user (Tibco, SMS, email, messenger, etc), and add new alerts to our system. We could restrict all changes to the underlying entitlements tables so that the changes would have to be made through an Administrative GUI, but because of the environment inside our company, we cannot. Someone might add a user through the Admin GUI, while others might decide to go right into our SQL Server database and add some new rows to our tables.

We need a way to detect when the entitlement tables have changed within our SQL Server database while our server is running, and have those changes reflected in the operation of our system. We cannot afford to bring our system down, make the changes to the entitlements, and restart our system. We also don't want to wait until the end of the week for the "green zone" period, when we can safely bring down our system, make the changes, and restart the system.

ADO.NET 2.0 has a class called SqlDependency. This class can be used as an interface between SQL Server 2005's Service Broker and a .NET application, informing the .NET application when a certain query's result set has changed.

Most of the literature on the SqlDependency class revolves around using it in an ASP.NET application. There are also a number of things that you need to be aware of, such as restrictions on the syntax of the query that you give to the SqlDependency class. It took a bit of digging and experimentation to get this stuff to work, but we found that it did the job nicely.

I have enclosed the source code to a simple EntitlementTableChangedWatcher class that shows you how to use SqlDependency. I have also included some links to URLs that I found useful. If you have any comments about the code, or if you find any bugs, please let me know and I will correct them.



#region Entitlements Table Watcher

internal class EntitlementTableChangedWatcher : DisposableObject
{
    // http://msdn.microsoft.com/en-us/library/ms379594.aspx is a good article by DevelopMentor.
    // Note: ConnectionString is assumed to be defined elsewhere in this class.

    private readonly EntitlementModel m_entitlementModel;
    private SqlConnection m_connection;
    private SqlDependency m_dependency;
    private SqlCommand m_command;
    private SqlDataReader m_reader;

    public EntitlementTableChangedWatcher(EntitlementModel entitlementModel)
    {
        // Stop() clears out any subscription left over from a previous run
        // before Start() establishes a fresh one.
        SqlDependency.Stop(ConnectionString);
        SqlDependency.Start(ConnectionString);

        this.m_connection = new SqlConnection(ConnectionString);
        this.m_connection.Open();

        this.m_entitlementModel = entitlementModel;
    }

    protected override void Free(bool disposedByUser)
    {
        if (disposedByUser)
        {
            this.CleanupCommand();
            if (this.m_connection != null)
            {
                this.m_connection.Dispose();
                this.m_connection = null;
            }
            SqlDependency.Stop(ConnectionString);
        }

        base.Free(disposedByUser);
    }

    private void CleanupCommand()
    {
        // In case we entered here from the callback, we need to clean up the
        // last command so that we can issue a new command.
        if (this.m_reader != null)
        {
            this.m_reader.Dispose();
            this.m_reader = null;
        }
        if (this.m_command != null)
        {
            this.m_command.Notification = null;
            this.m_command.Dispose();
            this.m_command = null;
        }
    }

    public void PollForNotifications()
    {
        try
        {
            // In case we entered here from the callback, we need to clean up the
            // last command so that we can issue a new command.
            this.CleanupCommand();

            // Create a new SqlCommand object, making sure that it has no
            // notification object associated with it.
            // http://msdn.microsoft.com/en-us/library/aewzkxxh.aspx contains
            // rules for the syntax of the query.
            this.m_command = new SqlCommand("SELECT Column1, Column2 FROM dbo.Subscription", this.m_connection)
                                 { Notification = null };

            // Create a dependency and associate it with the SqlCommand.
            this.m_dependency = new SqlDependency(this.m_command);
            this.m_dependency.OnChange += this.OnDependencyChange;

            // We send a fresh SELECT command to SQL Server on startup and after every
            // notification, using the DataReader to drain the result set. Executing the
            // command is what sets up the Query Notification plumbing; SQL Server will
            // then send us a one-time event when it detects that the query's result set
            // may have changed. It is up to us to manually re-subscribe for further events.
            this.m_reader = this.m_command.ExecuteReader();
            while (this.m_reader.Read())
                ;
        }
        catch (Exception exc)
        {
            Logger.Log.Error("Problem in the Server's entitlement table watcher", exc);
        }
    }

    void OnDependencyChange(object sender, SqlNotificationEventArgs e)
    {
        // This event firing is a one-time thing ... we have to manually
        // reset everything in order to receive another event.
        SqlDependency dependency = (SqlDependency) sender;
        dependency.OnChange -= this.OnDependencyChange;

        // Process the event.
        if (e.Type == SqlNotificationType.Change)
        {
            this.m_entitlementModel.ClearEntitlementsCache();
        }

        // Go back and wait for another event.
        this.PollForNotifications();
    }
}

#endregion
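
For completeness, the wiring at server startup looks roughly like this (I am simplifying the EntitlementModel construction, which depends on our internal plumbing):

// Hypothetical startup wiring, assuming a default EntitlementModel constructor
EntitlementModel model = new EntitlementModel();
EntitlementTableChangedWatcher watcher = new EntitlementTableChangedWatcher(model);

// Issue the first SELECT; this drains the result set and registers the
// one-shot Query Notification subscription with SQL Server
watcher.PollForNotifications();

// ... the server runs; each change to the watched table fires OnDependencyChange,
// which clears the entitlements cache and re-subscribes ...

// On shutdown, Dispose cleans up the command and reader and calls SqlDependency.Stop
watcher.Dispose();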



©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Saturday, September 20, 2008

Colin Clark has a Blog

Colin is the former CEO of Kaskad Technology, and is very involved in the Event Processing world. His new blog can be found at http://colinclarkeventprocessing.com


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Friday, September 19, 2008

Wow! What a Week!

It's finally the Friday morning of one of the most amazing weeks of my life. There are so, so many things to write about. I often use this blog as a "diary" for myself, so that I can look back 10 years from now and see the state of the world and my career at that time. And, this has to rank as a life-changing week.

Yesterday may have been the most thrilling ride in the stock market that I have ever seen. The day played out like a football game. I arrived in the morning, and watched the market try to mount a comeback after a nearly 500 point drop the day before. The Dow was up about 150 points, but all of a sudden, the rally started to lose steam. What was happening? Oh no! Several huge money market funds said that they had broken the buck. The Dow plunged from +150 to about -100. I watched Goldman and Morgan trade like penny stocks, as Goldman traded below 90 and Morgan traded below 13.

But wait, the Dow is mounting a huge rally, back into positive territory. What is happening now? It's our brethren in Britain banning short sales. Huge rally under way.

Now, another jump in the Dow. What's going on? It's the Yanks' turn now. CALPERS and a few other huge retirement funds coming out and saying that they won't lend out their shares of financial companies for the shorts. CNBC is saying that it's an all-out war against the shorts. God bless Apple Pie and the USA!

I go into a meeting with the Dow up about 100, and a half hour later, I come out and the Dow is up by 400. It's Hank Paulson and the new Resolution Trust Company! Not to be outdone, the SEC now wants to ban shorting of about 800 financial stocks.

As I write this, the FTSE is up around 8%.

My CEP team sits on the Equity Trading Floor, right below the big board in the first row of the floor. Sitting next to me are the two "brokers of the stars" .... the two guys who are the brokers for the traders themselves. Traders are screaming all around me. Applause breaks out spontaneously.

I may end up penniless, but at least, you cannot ever say that it has been boring!


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.

Thursday, September 18, 2008

Morgan Stanley and Citi/Wachovia ???

As I drove home from work yesterday, I listened to the reports on Bloomberg Radio about a possible new company called MorganStanleyWachovia or MorganStanleyCiti.

Most people think about the financial ramifications of such a marriage. I think of the ramifications for the IT departments, especially focusing on trading technology.

I think that both of the combinations would be extremely difficult to integrate as far as IT goes. The IT culture of Morgan Stanley is much different than that of any other place (with maybe the exception of Goldman). Morgan is very much a build-it-yourself culture. They have their own messaging system (CPS) and their own ticker plant (Filter). The mad Russian scientists who populate the various IT departments pride themselves on the code that they write. It is much, much different than the IT culture that you find in large banks like Citi, Wachovia, Wells Fargo, etc.

It is public information that a lot of Morgan Stanley talent has recently migrated to Citi. Vikram Pandit is ex-Morgan, and you see many more ex-Morganites starting to populate the executive ranks. So, the "pump is primed" at Citi for a Morgan Stanley merger, at least at the executive levels. But the IT cultures are so radically different, both in terms of culture and technology, that I think that it will take a very long time to integrate.

Wachovia always had the relatively laid-back Charlotte way of doing things. Lots of legacy technology swimming around their halls. A few years ago, Wachovia established a base in New York, and was able to lure a lot of good Goldman people away. But, there is still that back-and-forth between New York and Charlotte, and most of the Wachovia people that I know in New York have to fly to Charlotte on a regular basis to get their marching orders. This is not the Morgan way of doing things.

BankOfAmericaMerrill is going to be another fun ride on the IT-integration Ferris wheel. Merrill has some really good technology. Some is very new and some is really old (the CICS-based systems that were written 20 years ago are still fairly important).

This is a time when Integration Architecture might be the hottest skill set on Wall Street. I expect boom times for companies like Accenture, IBM, Capco, and the other large consulting firms that specialize in technology integration.


©2008 Marc Adler - All Rights Reserved.
All opinions here are personal, and have no relation to my employer.