Archive for the ‘Analytics’ Category

Hidden reset flag for WebCenter Interaction Analytics

Tuesday, March 25th, 2014

Recently I worked on a migration from some WebCenter Interaction servers to a virtual environment (you’re still using physical servers? How 2013…).

After the migration, there were some documents that weren’t showing up in WebCenter Analytics reports. As you probably know, Analytics events are stored in the ASFACT tables using objectIDs, and information about those objectIDs is captured in the ASDIM (dimension) tables (it’s done this way so that, even if you delete the object in the portal, you don’t lose the record of its access). But during some extensive testing, logging, and code decompiling, I discovered some interesting things that may be useful to you:

  • You can dramatically increase the logging that the Analytics sync job writes by changing the log settings to DEBUG in %PT_HOME%\ptanalytics\10.3.0\settings\config\. This will give you details all the way down to the exact queries being run against the database.
  • The number of objects synced to the dimension tables, along with the results of the sync jobs, is stored in the ASSYS_JOBLOGS table – and that table is never cleared.
  • The Analytics sync jobs that run nightly to update those dimension tables don’t actually look at all objects in the portal database; they only look for objects that have been updated since the last time the job ran. That timestamp comes from the ASSYS_JOBLOGS table.
  • There’s a hidden flag in the job’s source code called “-reset” that clears the job log entry for that particular job, causing all objects for that job type to be re-scanned.

Bottom line: if some of your Analytics reports don’t seem to contain all of the objects you’re expecting, it’s possible that events have been recorded but the dimensions simply don’t exist for them. If that’s the case, you can resolve this by adding -reset to the external operation in the WCI portal. Just remember to remove the flag after the job runs so that you’re not clearing the logs every night and generating unnecessary load.
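The incremental behavior boils down to something like this – a minimal Python sketch, not the real job (which is Java), and the shape of the ASSYS_JOBLOGS data here is an assumption for illustration:

```python
from datetime import datetime

def objects_to_sync(all_objects, last_run):
    """Mimic the sync job: only pick up objects modified since the last run.

    all_objects: list of (object_id, last_modified) tuples
    last_run:    datetime of the job's previous run from ASSYS_JOBLOGS,
                 or None if the log entry was cleared (the -reset case)
    """
    if last_run is None:
        # -reset cleared the job log entry, so everything gets re-scanned
        return [oid for oid, _ in all_objects]
    return [oid for oid, modified in all_objects if modified > last_run]

objects = [
    (101, datetime(2014, 1, 10)),
    (102, datetime(2014, 3, 20)),  # modified after the last run
]
print(objects_to_sync(objects, datetime(2014, 2, 1)))  # incremental: [102]
print(objects_to_sync(objects, None))                  # after -reset: [101, 102]
```

This is why the missing dimensions never fix themselves: if the object's last-modified date is older than the job's last run, the incremental path skips it forever until something clears that log entry.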

Monitor WCI Analytics with a DB query

Wednesday, May 9th, 2012

Lately I’ve found myself crafting all kinds of database queries for everything from monitoring to reporting, and I’ll be sharing many of those queries on this blog in the coming weeks. Today’s post answers a simple question: If a WebCenter Analytics service falls in the forest and no one is around to hear it, does it make a sound?

Or, more to the point, occasionally the Analytics Collector service keeps running (so many of your existing monitors don’t see it as “down”), but it stops recording data for one reason or another.

The trick is to create a monitor that basically says “let me know if no user logins have occurred in the past day”, because if nothing has been recorded, either the site you’re tracking sucks or the Analytics Collector is sucking. Using whatever tool you’d like (HP’s SiteScope is a popular one, and we use Function1’s Watcher on some sites), just create a monitor that runs the query below once per day and notifies you if the count comes back as ZERO, which would indicate a problem with the collector:

select count(1) 
from asfact_logins 
where occurred > (getdate() - 1)
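If you’d rather script the check than use a monitoring product, the logic is easy to wrap. A minimal sketch – the database call is stubbed out as a callable here; in practice you’d execute the query above through your DB driver of choice, and note that getdate() is SQL Server syntax (on an Oracle Analytics database you’d use sysdate - 1 instead):

```python
def collector_is_healthy(run_query):
    """Return True if any logins were recorded in the past day.

    run_query: a callable that executes the count query against the
    Analytics database and returns the count as an integer.
    """
    count = run_query(
        "select count(1) from asfact_logins "
        "where occurred > (getdate() - 1)"
    )
    return count > 0

# Stubs standing in for a real database call:
print(collector_is_healthy(lambda sql: 42))  # True  - events are flowing
print(collector_is_healthy(lambda sql: 0))   # False - time to page someone
```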

That way, you won’t have to explain to the boss why your Analytics report looks like this:

Prevent Analytics From Logging Gigabytes of Data

Wednesday, May 25th, 2011

This one comes courtesy of Fabien Sanglier, a WCI guru of epic proportions (no, he didn’t make me say that…).

At our client site, we noticed that Analytics was logging gigabytes’ worth of data to the weblogic.out file – a catch-all file for anything logged via the standard-out pipe. Note that IIS doesn’t record logs on this stream, so this is a Weblogic-only problem. The line recorded over and over (once for each hit):

OPENUSAGE(INFO ): ==> Event sent to: //UsageTracking@openusage/ wci-analyticscollector| port=31314/ BYTEARR

To prevent these events from being logged, he suggests updating %PT_HOME%/settings/configuration.xml to reduce logging and turn off console logging:

<component name="analytics-communication:config" type="">
  <setting name="logging:level">
    <value xsi:type="xsd:string">WARN</value>
  </setting>
  <setting name="console-logging:enabled">
    <value xsi:type="xsd:boolean">false</value>
  </setting>
</component>


WCI Analytics Startup Order

Saturday, May 7th, 2011

If you’re using WebCenter Analytics, you’ve no doubt seen this issue before:

The Analytics Context could not be created.  This is typically due to a configuration problem.  Review the Analytics UI log for more information.

While there are many causes for this and many fixes such as re-scripting the security database, sometimes the simplest solution is overlooked: startup order.

When Analytics needs to be (re)started, the services need to be restarted in the proper order:

  1. WSAPI Server.  The API Server provides SOAP access to the portal objects, such as users.
  2. LDAP Directory Service.  The LDAP Directory Service connects to the API Server to surface Plumtree users and groups via LDAP.
  3. Analytics UI.  This is the service that ultimately provides all the fancy reporting, and it can’t work without the other two already running, since it needs to check credentials (which introduces its own set of problems).
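In script form, the rule is simply “start each dependency before its dependents.” A hypothetical sketch – the service names here are descriptive labels, not the literal Windows service names, and the start mechanism (e.g. a wrapper around net start) is passed in as a callable:

```python
# Dependency-ordered startup: each service starts only after the
# services it relies on. Names are illustrative, not literal.
STARTUP_ORDER = [
    "WSAPI Server",            # SOAP access to portal objects
    "LDAP Directory Service",  # surfaces users/groups via LDAP
    "Analytics UI",            # needs both of the above for credentials
]

def restart_analytics(start_service):
    """Start the services in dependency order.

    start_service: a callable that actually starts one service,
    e.g. a wrapper around 'net start <name>' on Windows.
    """
    started = []
    for name in STARTUP_ORDER:
        start_service(name)
        started.append(name)
    return started

print(restart_analytics(lambda name: None))
```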

As a side note, the Analytics Collector doesn’t require the API or LDAP service.  It simply accepts inbound events such as searches and logins via UDP from the portal and records them to the database.  It’s a good thing that the services are separate, since even if the UI isn’t working, in most cases you can be reasonably confident events are still being recorded and not lost forever.


Bug Blog 11: Analytics doesn’t export Document Details

Thursday, April 7th, 2011

We’ve discussed WCI Analytics many times in these posts, and have covered quite a few bugs and patches. This post has all of that drama, so join me! You’ll laugh, you’ll cry. You’ll buy the book.

Every now and then, products in the WebCenter Interaction stack have a bug. And occasionally, Oracle releases a fix for said bug, and things are right with the universe again. But, once in a blue moon, that patch disappears when the next version of the product is introduced. Such is the case with the “WebCenter Analytics Documents Report Export May Return Different Report [ID 783591.1]” issue.  The patch addresses this problem:

If you choose to “Export User Detail” for the “Other Metrics” section, “Documents” tab report, the results from a different report will be exported.

Or, more succinctly: if you export the User Detail report in Analytics for Collab Documents, it will not actually include document details.  The feature worked in the Aqualogic days, so what’s the deal now?  Well, the story is that you used to be able to export that report properly, until it broke in ALI Analytics 2.5.  Oracle released a hotfix for 2.5 last year to repair the issue, and the release notes for that patch say to install patch 8198674, or upgrade to Analytics 10.3.

Problem is: the patch that worked for Analytics 2.5 isn’t applicable to 10.3, and 10.3 doesn’t include the patch.

Solution?  I’ll spare you the details of this pretty complicated trick, but at a high level, you need to:

  1. Download this fixed version of BaseCollabServerDataProvider.class, add it to analytics-webui.jar, and put that back into analytics.war.
  2. Add this line to the ptanalytics\10.3.0\settings\config\wrapper.conf file:

  3. Reinstall the service (see the patch release notes)
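Step 1 amounts to patching a single class inside a nested archive. Since jars and wars are just zip files, the repacking can be sketched with Python’s zipfile module – a hedged example; the entry path for the class inside the jar is hypothetical, so check where the file actually lives in your analytics-webui.jar and analytics.war before using anything like this:

```python
import shutil
import zipfile

def replace_in_jar(jar_path, entry_name, new_bytes):
    """Rewrite one entry inside a jar (zip) archive.

    zipfile can't update an entry in place, so every other entry is
    copied into a fresh archive that then replaces the original.
    """
    tmp = jar_path + ".tmp"
    with zipfile.ZipFile(jar_path) as src, \
         zipfile.ZipFile(tmp, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename == entry_name:
                dst.writestr(entry_name, new_bytes)  # the patched class
            else:
                dst.writestr(item, src.read(item.filename))
    shutil.move(tmp, jar_path)

# Usage sketch (paths hypothetical): extract analytics-webui.jar from
# analytics.war, patch the class, then put the jar back the same way:
# replace_in_jar("analytics-webui.jar",
#                "com/plumtree/analytics/BaseCollabServerDataProvider.class",
#                open("BaseCollabServerDataProvider.class", "rb").read())
```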

As shown in this source code diff, this class just defines the original “dimensions” in Analytics that include the document details:

This will cause your “export user details” report to change from this:

… to this:

Release notes for the patch after the break…

WebCenter Analytics 10.3: Hibernate 3.0.5 and Cewolf 0.10.3

Sunday, January 30th, 2011

I’ve had a couple people ask about this, so I figured I’d make it easy for you the next time you run into the issue.

WebCenter Analytics now requires you to have the binaries for Hibernate 3.0.5 (for DB access) and Cewolf 0.10.3 (for charting functionality).  Unfortunately, Cewolf 0.10.3 is no longer available for download, and the installer throws warnings about the checksums not matching if you use a later version.  In fact, I’ve had difficulty getting Analytics to work with a newer version after disregarding the error, and it’s best to just use the older versions that Analytics wants – especially if you ever plan on contacting Oracle Support.

The older binaries themselves aren’t too hard to find, but if you’d like to just grab them here, they’re all yours:

Again, use caution with any files other than these; otherwise you may end up with ugliness like this:

Bug Blog 9: WebCenter Analytics Timeouts

Wednesday, January 26th, 2011

In our last post, we touched on the unusual nature of BEA’s acquisition of Plumtree, and how BEA largely kept the portal product lines separate with ALUI and WLP.  But that’s not a completely fair assertion: BEA did have a longer-term goal of integrating the two portal “front-ends” through various back-end tricks (such as Ensemble and WSRP).  Similarly, while you may have read that post as a somewhat bleak assessment that, in my opinion, WCI is dying a slow, painful death, in reality Oracle has stated plans to provide integration services between the products through similar means.  So you could use a WCI front-end with integration through Ensemble and WSRP to other WebCenter Services such as Blogs and Wikis.

The reality, though, is that these types of integrations take time – and sometimes, lots of it.  As evidence, look no further than Aqualogic Analytics.  When BEA acquired Plumtree, one of the gaping holes in WLP was Analytics – usage reporting for the product.  Plumtree Analytics was becoming a solid product, but it was very tightly integrated into the Plumtree portal.  So the decision was made to try to abstract some of the major pieces out, with the thinking that these abstractions could be useful elsewhere.  For example, the ill-fated Security Services (once used by the also ill-fated PEP line and now just built into Analytics) and the existing Directory Services came out of this integration attempt.  The idea was that by abstracting out security and user management:

  1. These services would be available to other applications that were developed down the line
  2. Analytics would be more compatible with WebLogic Portal, which also had an LDAP repository to access user and group information

I think that if more time had been available, the integration could have become more seamless; the problem is that no one won in this attempt because it was aborted too early.  I have no idea whether WLP still supports Analytics integration, the old Security Services are now just built into the product as a phenomenally complicated set of DB tables that make little sense, and Directory Services are a dramatically inefficient way to access user and group information.

Case in point – I’ve had a couple of clients report Analytics timeouts for some users, but other users were seeing the proper report:

How does this relate to the whole Plumtree/BEA/Oracle integration saga? The old Plumtree Analytics application used a SQL query called something like QUERY_USER_FLATTENED_GROUPS.  Basically, this was a Plumtree Portal-specific SQL query that, given a user ID, would spit out all nested groups that user was a member of.  So if a user was a member of Group A, and Group A was a member of Group B, the query would return both Group A and Group B.

The ALUI version of Analytics, though, utilized Directory Services so that group membership didn’t need to come from PT-specific SQL queries.  It could come from generic LDAP queries.  The problem is, LDAP doesn’t have a mechanism like QUERY_USER_FLATTENED_GROUPS, so for any given user, Analytics needs to query LDAP for the groups a user is in.  And then, Analytics needs to check to see which groups THOSE groups are a member of.  And so on, and so on.  You’d be surprised – through inheritance, you may be a member of thousands of groups, and rather than a single SQL query, you’re now dealing with tens of thousands of LDAP calls.
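The difference is easy to see in miniature: a flattened SQL query returns the whole closure in one shot, while over LDAP you have to walk the membership graph yourself, one query per entry. A Python sketch – the group data is fabricated for illustration, and each dictionary lookup stands in for what would be a separate LDAP round-trip:

```python
# Parent-group lookup table standing in for per-entry LDAP queries.
# Each access to this dict would be one LDAP round-trip in real life.
MEMBER_OF = {
    "alice":   ["Group A"],
    "Group A": ["Group B"],
    "Group B": [],
}

def flattened_groups(user):
    """Expand nested membership the way Analytics must over LDAP:
    query the user's groups, then each group's groups, and so on.
    Returns (groups, number_of_queries)."""
    seen, queue, queries = set(), [user], 0
    while queue:
        entry = queue.pop()
        queries += 1  # one LDAP call per user/group examined
        for group in MEMBER_OF.get(entry, []):
            if group not in seen:
                seen.add(group)
                queue.append(group)
    return sorted(seen), queries

print(flattened_groups("alice"))  # (['Group A', 'Group B'], 3)
```

Three queries for two groups is harmless; thousands of inherited groups per user is where the tens of thousands of LDAP calls come from.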

Bottom line:  Integrations and product convergence can work, but they’re phenomenally complicated, because every piece of abstraction added can cause unanticipated side effects.  Which is probably why Oracle is taking this whole process pretty slowly.

Full Bug Report after the break for your convenience.

The Sad, Sordid Story of WCI Analytics and Its Three Critical Fixes

Sunday, January 2nd, 2011

Back in April, I wrote a WebCenter Interaction Patch and Hotfix Round-up. Since then, a couple of additional critical fixes have been released for Analytics. And you probably need them.

If, after upgrading to WCI Analytics, you’re not seeing any new events being recorded, check any one of your Analytics Views in your database. Chances are, you’re seeing a NULL value for VisitID:

Oracle has released three separate critical fixes trying to get this finally working, and from my experience you need to install the last two listed in the table below.

The real problem is, this isn’t just a matter of swapping out files: for each month that passes before you notice the problem, more tables will continue to accumulate these invalid values. And Oracle’s fix in the last patch set is FAR from ideal if you want to get those VisitIDs recreated. I won’t bore you with the details (trust me, you’ll be bored enough running this crazy fix), but suffice it to say that there are no fewer than 15 steps. And during those steps, potentially dozens of SQL scripts are generated (depending on how many months you need to repair). For each month that you need to restore, you have to stop the collector, run a couple of scripts, start the collector, then let it run a while to recreate everything. Wait a while, verify the number of untagged events you have, repeat.
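The per-month cycle looks roughly like this – a sketch only, with the actual operations stubbed out as callables; the real procedure is the 15-step process in Oracle’s readme:

```python
def repair_month(stop_collector, run_scripts, start_collector, count_untagged):
    """One iteration of the VisitID repair cycle for a single month:
    stop the collector, run the generated SQL scripts, restart the
    collector, then report how many untagged events remain."""
    stop_collector()
    run_scripts()
    start_collector()
    return count_untagged()

def repair_all(months, make_step):
    """Repeat the cycle per month; stop early once nothing is untagged.

    make_step: callable returning the four operations for a given month.
    """
    remaining = None
    for month in months:
        remaining = repair_month(*make_step(month))
        if remaining == 0:
            break
    return remaining
```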

How long does it take? Quoting from the Readme file: “Tagging old event data can take days or weeks to complete depending on the volume of your event records. Internal tests repaired the data at a rate of 150k to 300k events per hour.” DAYS or WEEKS!?

In practice, I’ve run through each of these in about a day, going back about 6 months on moderately sized installations. But you should plan on getting these updates in sooner rather than later, because the longer you wait, the longer it’s going to take to repair all those busted tables.

Redux: WCI 10gR3 Installer Errors

Wednesday, November 3rd, 2010

Another Rock Star in the WebCenter Interaction consulting industry, Bill Benac, wrote a blog post years ago describing a problem with the WebCenter Interaction 10gR3 installers.  I hadn’t worried about it for a long time until it bit me in the ass – after dozens of successful installs and upgrades of the WCI portal, I had never seen the problem he reported.  The problem, as he described it, is that sometimes a portal installer chokes and displays an error like:

Serious errors occurred during your installation.  Click OK and then click through to the end of installation to complete installation and then look at log for WebCenter Interaction in …

Recently, the same issue bit me during an ALUI upgrade, and I saw pretty much the same error in the portal, Collaboration Server, and Analytics.  The errors seemed benign, so I just ignored them until I realized that the WebCenter Analytics installer hadn’t created the Analytics Collector service.

It turns out – and I have no idea why I’d never come across this issue with other installs and upgrades – that the WCI installers check for free memory on the host machine.  In some (unknown and unusual) circumstances, the installer can’t query the Windows OS for free memory, so it defaults to 0.  But 0GB of free RAM is less than what it needs, so the installer chokes.  In Collab and the Portal, the error comes at the end of the installation process, so it’s pretty benign; but for Analytics, it gets thrown before the services are created, so you’re boned unless you fix it.

As for fixing it, check out Bill’s Blog Post, but the gist is that you need to set a fixed amount of Virtual Memory to avoid an error like…

Oracle Support Master Notes and Webinars

Saturday, October 2nd, 2010

I’ve been critical of Oracle Support in the past, but recently had a great experience with some of the old Plumtree support buddies that are still around.  Specifically, Merrick Huang in Oracle Support was able to provide a tremendous amount of assistance on a very thorny search issue I was having at a client site – one I’ll be writing about here in upcoming posts.  Before we get into the nitty gritty of that problem, I want to share with you a great resource I didn’t know existed until now: Oracle Support Master Notes and Webinars (login required).

The purpose of “Master Notes” is to “provide the most important links that users will need to install and support the product”, and there are some pretty decent pages in there if you know where to look.  For example, the IDK Master Note is a collection of a bunch of documentation, KB articles, known issues, and bug fixes all in one place.

But what I really wanted to highlight here is the Webinars provided by Oracle Support – with one in particular being the best Oracle Webinar I’ve seen: the Search Webinar, by Eno Gjerasi.  Eno shows that there’s still life left from the Plumtree support group, and demonstrates a level of knowledge of the Search Server that rivals most engineers or consultants.  There was one tip in particular that I’ll focus on in upcoming posts (about how to communicate directly with Search), but I encourage you to check out all three Webinars (Search, Portal / SSO, and Analytics) and the other Master Notes – you may just find a gem in there and wonder how you made it all these years without knowing “that one thing” you never knew you needed.

Keep up the good work, Oracle support!