Oops, I missed a day. The plan was… just a plan, and plans are there to be changed. Sticking to the plan, I could have written a blog about hockey, or about rum & coke. But all Canadians know by now that I talk about field hockey, not ice hockey, so I’m far from an expert on that subject. And the rum & coke last night made it a major risk to write something and share it with the world.
Now that the first day of the expo is over, and with no plans for a night downtown with a drink or two, this is a good time to make this a double blog.
The second half will be about the venue itself: the good, the bad and the ugly of my first day at Enterprise World, after two days at the PartnerSummit. But first we need to be serious about the other products in the OpenText stack (or should I say opentext stack? If I’m correct, in the new branding OpenText is all lowercase?).
In this first part of the blog, I will talk about the product that I think will be a cash cow for Informed Consulting. Almost every expert I talk to within OpenText tells me that InfoArchive is that cash cow, but stubborn as I am, I’m thinking of a different product. InfoArchive is not new to us. As someone with very high respect for Jeroen van Rotterdam (the visionary behind InfoArchive), I should be jumping to sell and implement it, but I’m not (yet). I’m not saying it is not a cash cow; I’m saying it is not a cash cow for Informed Consulting (yet).
Let me explain. Our focus is heavily on compliance-demanding organizations such as life sciences, engineering and regional government. First, the management responsible for deciding on these types of implementations is always the business, not IT. InfoArchive is a solution focused on lowering IT cost: a pure CTO decision. So finding the right person to talk to and gaining their trust is a lengthy process. Secondly, we’ve talked to a number of our clients about InfoArchive, and they all recognize the problem and could even describe in detail what they would do to solve it, but as long as the legacy systems are running, there are more pressing matters at hand that need to be solved. And lastly, we at Informed Consulting have already had a number of discussions about how to set up a good, structured analysis process for defining your InfoArchive metadata structure. The question being: what are the 10 to 20 top questions to ask to understand how the data from the different solutions should be structured together? You could completely separate all the different content/data parts, but then you lose a lot of extra value.
So, if it is not InfoArchive, what will it be? This should really be a no-brainer: ANALYTICS – iHub and friends. I’m adding the perfect slide that I got in one of the sessions, because I think this picture says more than a thousand words.
This approach to analytics is so simple and yet so perfect that you (we) should be able to convince every customer that this is a mandatory product. The problem of information overload is clear to anyone. The need to bring structure to the massive piles of unstructured information is also clear. And it follows easily that you need to use the available structured data to bring more structure to the unstructured data. So what are the challenges?
- It should be configuration only. If configuring or adding new data streams requires too much IT involvement, it will lose traction with the business and they will try other options, like MS Access, MS Excel or other tools. I don’t yet know iHub’s answer on this, but my first impression is very good.
- A clear description/selection of which process to follow when adding a data stream: is it unstructured data? Does it need text analysis? Does it need interpretation? Does it need AI analysis? Etc., etc.
- Easy to create and adapt the user interface: with dashboards, the user experience is everything. The problem is that the standards for good design are currently changing on an almost daily basis, so this should be very simple and very flexible.
- An easy WYSIWYG editor to relate data from different streams and identify the relationships between them.
- Pricing: for the delivered functionality, a good (high) price is very justifiable, but there is a major risk that point solutions, like repository-specific reporting tools with a much lower price, will be chosen because they can deliver the basic trick. So there needs to be a lightweight starter module that gets the solution going, with an easy upgrade path once the demand is there.
As I don’t know the answers to any of these five challenges for sure, I have my work cut out for me in the coming weeks. So the product would need to be in very bad shape not to end up in the portfolio of Informed Consulting (if we are permitted to resell and implement it?). A lot of food for thought.
Now for the second part: the good, the bad and the ugly of the #OTEW venue (not the actual content of the conference, but the setup). As a user-conference veteran, you see that all event agencies in North America look closely at each other, and conference venues all use a number of standard concepts. So there is not much unique to report on that side. But there are some minor points to mention:
- Having buses and tuk-tuks with the OT logo on them makes you feel more special.
- Being able to make your own fresh toast at breakfast, for the “small” crowd of 4,000 people eating. I have not seen this before, and it earns a big round of applause!
- The expo hall is chaotic. Every Pod is randomly assigned a product. Not only are the partners placed randomly, but there is also no structure in the OT Pod layout. So I see all attendees scouring the room to find information. Next time, please group the Pods by topic.
- No, I’m not sharing pictures of all the sweating people I saw after breakfast. It is three floors up, a walk across the street, two floors down again, another long walk, and then one more floor down to get to the expo. Yes, there are elevators… but still, this is nasty.
Going to think with my eyes closed now…