Archive for the ‘Coding Tricks’ Category

WebCenter Interaction Debug Space

Saturday, October 22nd, 2011

Years ago I posted about a little-known MemoryDebug Activity Space in the Plumtree portal (or were we already calling this thing AquaLogic or ALUI by then?).

Recently I found a somewhat “meta” page with an activity space name of just “Debug” that links to this MemoryDebug space and two other useless pages. I won’t get into the gory details here, but I stumbled across this when dealing with some code related to varpacks (we’ll get to all those gory details in due time).

The gist of this post is that you can not only view the debug space in your portal by setting the space to Debug (/portal/), but there’s also a Friendly URL for this space: /portal/

In fact, all three of these debug pages have friendly URLs:

  • portal/ – a useless ‘help’ page
  • portal/ – the memory debug page, largely only useful for reloading varpacks
  • portal/ – the useless page you can access through Admin: Select Utility: System Health Monitor

This isn’t an entirely earth-shattering discovery, but as I’ve been revisiting this debug space and friendly URL configurations, I’ve made some other interesting discoveries that I’ll post about soon. In the meantime, try out those friendly URLs in your portal and marvel at the completely hidden piece of functionality you never wanted or needed.

Debugging JavaScript in IE, FF, and Chrome

Thursday, October 13th, 2011

JavaScript debugging can be tough – particularly when dealing with the multiple tiers that the WebCenter Interaction portal pulls code from. JS can come from different locations and get combined on a single page, resulting in name collisions and other subtle problems when the portal page is rendered in the browser.

Sure would be nice to step through that JavaScript code in IE Developer Tools or Firebug, right? Sure, you can set breakpoints in both of those tools, but what if you don’t know exactly where to set them?

Simple – just use this line in your JavaScript code:

debugger;
Whenever your debug window is open, this command will instruct the tool to break immediately, which will allow you to monitor variables, set other breakpoints, and step through the code. Bonus tip: whether you’re using Internet Explorer, Firebug, or Chrome, the F12 key will bring up the respective “debug window” in that environment.

One thing to keep in mind with this command is that it breaks the code immediately. So, in the above example, the breakpoint is hit before the DOM even loads. In this case, you may be scratching your head wondering why some elements in the DOM or variables in the script aren’t being displayed properly in the Watch window. That’s because if the JS or HTML comes after your breakpoint, the browser won’t have processed it into the DOM yet.

Portal API Search Sample Code: IPTSearchRequest

Tuesday, June 14th, 2011

Today’s post is a quick code snippet from Integryst’s LockDown product (which relies heavily on search to identify objects for reporting on security configurations), and provides a pretty good sample of how to search for objects in the Plumtree / ALUI / WebCenter portal using the Search APIs. Hope you find it helpful!

See the docs for IPTSearchRequest and PT_SEARCH_SETTING for more options in developing your own search killer app.

// get the search query from the request parameters
string searchQuery = Request.Params["query"];

if (searchQuery == null)
return; // do nothing if there's no query

ArrayList userGroupResults = new ArrayList();
int classid, objectid;
string classname, objectname;

// get the portal session from the HTTPSession
PortalSessionManager sessionManager = PortalSessionFactory.getPortalSessionManager(Session, Request, Response);
IPTSession ptSession = sessionManager.getAPISession();

//search for users and groups that match this query
IPTSearchRequest request = ptSession.GetSearchRequest();

//turn off requests for collab/content apps

// Restrict the search to users and groups

// Restrict the search to specific folders

// make sure the appropriate fields are returned.

// set search order
IPTSearchQuery query = request.CreateBasicQuery(searchQuery + "*", "PT" + PT_INTRINSICS.PT_PROPERTY_OBJECTNAME);

IPTSearchResponse results = request.Search(query);

UserGroupObject tempRes;
int numUsersGroupsFound = 0;

// iterate the results
for (int x = 0; x < results.GetResultsReturned(); x++)
{
	objectname = results.GetFieldsAsString(x, PT_INTRINSICS.PT_PROPERTY_OBJECTNAME);
	objectid = results.GetFieldsAsInt(x, PT_INTRINSICS.PT_PROPERTY_OBJECTID);
	classid = results.GetFieldsAsInt(x, PT_INTRINSICS.PT_PROPERTY_CLASSID);
	classname = GenericObject.getClassNameFromID(classid);

	// search filter doesn't seem to work; make sure the classid is user or group
	if ((classid == PT_CLASSIDS.PT_USER_ID) || (classid == PT_CLASSIDS.PT_USERGROUP_ID))
	{
		// do stuff
	}
}

Collab Office Tools – “Don’t Show Again”

Monday, February 7th, 2011

When working on self-signing the Plumtree Collaboration WebEdit applet to prevent the certificate warning, I had to install WCI Collaboration Server on my server.  As I usually do, I made a backup of the collab folder, ran the install, and used Beyond Compare to diff the results.  Interestingly, I found this little gem in the collab.war file in /docman/editor/officeToolsInstall.jsp:

Wha-?  Why would 10.3 change a cookie that expired on 1/1/2020 to expire on 1/1/2010?  Who knows, but what it likely means is that the “Do not show this again” checkbox in the Collaboration Office Tools popup will never work, because it’s setting a cookie that’s perpetually expired.

The solution?  Install that Collaboration Server IE8 Critical Fix, or just crack open the .war file and update /docman/editor/officeToolsInstall.jsp to use 2020 for the cookie expiration date.

There’s a WCI App For That 6: User Auth Source Flipper

Monday, January 10th, 2011

I recently worked on a project with a client that was migrating from LDAP to Active Directory.  Because we didn’t want to lose all the group memberships of the existing users, or any of their history (such as Collaboration Server authors), we needed to come up with something a bit more… creative… than just creating a new Authentication Source and telling users to start using the new accounts.  Kenan Shifflet wrote a great post about migrating Plumtree Authentication Sources a while back, but I took a different approach because we were going from LDAP to AD, and the GUIDs and CRCs were all different.  In fact, the only thing that was the same between the two authentication sources was the login names.

Swapping the Auth Source IDs would have resulted in each of the users getting deleted and recreated, since these GUIDs didn’t match.  But by swapping the OBJECT IDs of corresponding users, we were able to preserve all group membership and security.  Why did this work?  Well, in the PTUSERS table, all user objects have an Object ID, a mapping auth name, and other values that allow the respective Authentication Web Services to match a user to the source repository, whether it’s LDAP or AD.  But in every other portal table, only the Object ID is used for things like security and group membership.

So, for example, let’s say I have an mchiste account in LDAP that’s been fully configured; I’m a member of a bunch of groups, I’ve uploaded documents to Collaboration, and my user ID is in the Access Control List for various portal objects.  Then we run the AD Synch and there’s now a new mchiste account, but it doesn’t have any of that configuration associated with the old user.  If I just swap the object IDs for the two users, then all of a sudden my AD account has all the correct group memberships and security settings, and the LDAP one looks like it’s brand new.

That’s exactly how User Auth Source Flipper works – it matches users from the two authentication sources, then swaps out the ObjectID if there’s a match.
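To make the mechanics concrete, here’s a minimal, hypothetical sketch of the swap, assuming the PTUSERS table described above with an OBJECTID column, and using a placeholder ID that no real object uses. This is illustrative only – the actual Flipper adds matching logic and safety checks:

```java
import java.util.Arrays;
import java.util.List;

public class ObjectIdSwap {
    // Placeholder ID assumed to be unused by any real portal object
    static final int TEMP_ID = -999999;

    // Classic three-step swap: park the LDAP user's ID on a temp value,
    // give the AD user the LDAP ID (inheriting its security and group
    // memberships), then give the LDAP user the AD ID.
    static List<String> swapStatements(int ldapObjectId, int adObjectId) {
        return Arrays.asList(
            "UPDATE PTUSERS SET OBJECTID = " + TEMP_ID + " WHERE OBJECTID = " + ldapObjectId,
            "UPDATE PTUSERS SET OBJECTID = " + ldapObjectId + " WHERE OBJECTID = " + adObjectId,
            "UPDATE PTUSERS SET OBJECTID = " + adObjectId + " WHERE OBJECTID = " + TEMP_ID);
    }

    public static void main(String[] args) {
        for (String sql : swapStatements(1234, 5678)) {
            System.out.println(sql);
        }
    }
}
```

Run inside a single transaction, of course, so a failure halfway through doesn’t leave a user parked on the temp ID.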

Got an idea for an interesting app?  Interested in developing your own Auth Source Flipper?  Give us a shout.

Oh, and “There’s a WCI App For That” can’t possibly be confused with “There’s an app for that”, right?

There’s a WCI App For That 5: SearchFixer

Monday, October 18th, 2010

We’ve discussed a tiny bit about Knowledge Directory cards and how the WCI Search Update job plays into the crawler ecosystem, and seen that it’s possible to directly query the WebCenter Search Service – so how ’bout a quick real-world example that expands on both of those concepts?

Here’s the scenario:  I had a client that was showing discrepancies between “Browse” and “Edit” modes in the ALUI Knowledge Directory, and in Snapshot Queries.  I suppose I owe you all a more detailed explanation of these topics – which I’ll put up in a couple of days – but for the purposes of this article, suffice it to say that the WCI search index and the database were mismatched.  Worse, the regular method of repairing this discrepancy (running the Search Update job after scheduling a Search Repair) wasn’t working.

So, to fix this issue, I developed another quick-and-dirty application that enumerated all folders in the Knowledge Directory, ran a search for the cards within each folder, then queried the database.  The application would then compare the results and, if they differed, allow the admin to “fix” the problem by deleting all cards from the search index for that folder.  When the Search Repair job next ran, it would re-create these entries without all the extraneous records.
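The compare-and-flag logic might look something like this sketch, where the maps stand in for the real per-folder card counts pulled from the Search API and the database (names and structure are illustrative, not the actual SearchFixer code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SearchFixerSketch {
    // Compare per-folder card counts from the search index against the
    // database; folders where they differ are candidates for having their
    // cards deleted from the index so the Search Repair job can rebuild them.
    static List<Integer> foldersNeedingFix(Map<Integer, Integer> indexCounts,
                                           Map<Integer, Integer> dbCounts) {
        List<Integer> mismatched = new ArrayList<>();
        for (Map.Entry<Integer, Integer> e : dbCounts.entrySet()) {
            int inIndex = indexCounts.getOrDefault(e.getKey(), 0);
            if (inIndex != e.getValue()) {
                mismatched.add(e.getKey());
            }
        }
        Collections.sort(mismatched);
        return mismatched;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> indexCounts = new HashMap<>();
        indexCounts.put(201, 5);
        Map<Integer, Integer> dbCounts = new HashMap<>();
        dbCounts.put(201, 5);
        dbCounts.put(202, 7);
        System.out.println(foldersNeedingFix(indexCounts, dbCounts)); // [202]
    }
}
```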

As with the last post, I’m not particularly proud of the code as a well-architected solution, but it works, and I’d be happy to help you out if you want to get in touch.  Some of the relevant code is after the break. (more…)

Sorting News Articles in ALUI Publisher

Thursday, October 14th, 2010

Out of the box, WebCenter Interaction Publisher has a News portlet template that allows Content Managers to create News Articles, and display them in a summary portlet with a link to see the entire list of articles.  The articles themselves are:

  1. stored as Content Items under the -article_path-/ Articles folder,
  2. created based on the templates in /Portlet Templates/ _NEWS/ en/, and
  3. rendered by the “Main Page” (the portlet itself showing the top n articles) and “Index” (the list of all news articles when the user clicks “more”) Presentation Templates.

The problem is, the articles are listed based on when they were published, not when they were created or modified.  That doesn’t always make sense – what if someone goes in and publishes the entire folder?  You’d end up with all news items showing up on the same day.  The fix here is to update the two Presentation Templates mentioned above to sort and display based on when the “article” Content Items were modified, not published.

What you may not know is that when a user creates a portlet from a Publisher template (such as the one in /Portlet Templates/ _NEWS/ en/), Publisher creates a COPY of ALL OBJECTS into the new Publisher folder the Content Manager specifies when creating it.  The implication here is that you not only need to apply these fixes to the Content Item TEMPLATES in Publisher, but also each individual News Portlet independently.  (Or, you could use something like PublisherManager, but that’s another story entirely).

However you do it, the changes that need to be made can be found after the break. (more…)

There’s a WCI App For That 4: CardMigrator

Tuesday, September 28th, 2010

In my last post, I talked about the need to update both a card’s location AND its CRC in the WebCenter Interaction database to migrate cards from one UNC path to another.  Today’s post is about an “App For That“ – a utility I had written last year but essentially abandoned until Fabien Sanglier’s great tip about the CRC value needing to be changed.

The app is one of those “thrown together” .NET jobs where I was more focused on the need to update tens of thousands of cards for a client, rather than building a pretty and usable UI.  As such there isn’t a whole lot of error checking, and I’m not comfortable sharing the whole code base here – mostly because I’m just embarrassed about how it was hacked together.  But, if you’ve got a need for something like this, drop me a line and hopefully I can help you out or just send you the code as long as you promise not to make fun of me :).

The code is pretty straight-forward:

  1. After entering the connection strings for the API and the DB (since, as mentioned, we haven’t yet found an ALUI / WebCenter API to make the change to the CRC), you click “Load Crawlers”. 
  2. The crawler list shows up in the tree on the left, grouped by Content Source since you’re likely only updating cards based on the NTCWS, and not WWW or Collaboration Server crawlers. 
  3. Clicking on a crawler shows you all the cards associated with that crawler, as well as a bunch of useful metadata. 
  4. From there, you can do a search and replace on the UNC paths for all the cards.  The update process uses the API and Database methods to update the cards and the crawler, so the next time the crawler or Search Update jobs run, no cards are updated since everything matches up – assuming, of course, you’ve already moved the physical files to the new location! 
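The search-and-replace in step 4 boils down to a UNC prefix rewrite on each card’s path; a minimal sketch (the helper name is mine, not from the app):

```java
public class UncPathRewrite {
    // Replace the old UNC root with the new one; paths under other
    // roots are left untouched.
    static String rewrite(String cardPath, String oldRoot, String newRoot) {
        if (cardPath.startsWith(oldRoot)) {
            return newRoot + cardPath.substring(oldRoot.length());
        }
        return cardPath;
    }

    public static void main(String[] args) {
        System.out.println(rewrite("\\\\oldfileshare\\folder1\\mydoc.doc",
                "\\\\oldfileshare", "\\\\newfileshare"));
        // prints \\newfileshare\folder1\mydoc.doc
    }
}
```

The rewritten path is what then gets pushed through the API for the card and the crawler, alongside the direct CRC update covered in the previous post.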

Some relevant code is after the break; again, drop me a line if you’re looking for more.


Updating the Location of a Crawled Card in WebCenter Interaction

Friday, September 24th, 2010

Much has been written on Content Crawlers and Cards in Plumtree’s Knowledge Directory.  Chris Bucchere has done an excellent writeup on creating custom crawlers, and Ross Brodbeck has done the same for cards in the Knowledge Directory.  In fact, as I re-read those two articles, I realize this post addresses open issues in both – how to change the location of a card, and what the Location CRC values within a card are.

In the spirit of giving credit where credit is due, today’s post is based an excellent tip I learned recently from Fabien Sanglier, who figured out this little gem long before I did, and I believe had even posted code on his ALUI Toolbox project.

First, a word on crawlers:  basically, WCI’s Automation Server just calls several methods in a crawler to perform the following (I’m heavily paraphrasing here):

  1. Open the root path specified in the crawler’s SCI Page and query for “containers” (a.k.a., folders).
  2. Query that container for all “documents” (a.k.a., cards, which don’t necessarily have to be files).
  3. Recursively iterate through each container and query for the documents within each.
  4. For each document found, query for document signature and document fetch URL.
  5. If the document signature or path has changed, flag the card as changed and refresh it (which could mean metadata, file content, or security).

Later, the Document Refresh and Search Update jobs will also use that crawler code to keep track of whether documents have changed in the source repository (by checking the document signature), and whether the document has moved.  If the signature hasn’t changed, the card remains untouched.
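The crawl-and-compare loop described above can be sketched against a made-up container/document model – the real crawler interfaces in the IDK differ, and all the names here are illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CrawlSketch {
    // Toy stand-ins for crawler "containers" and "documents"
    static class Doc {
        String path;
        String signature; // e.g., last-modified date for an NT file crawler
        Doc(String path, String signature) { this.path = path; this.signature = signature; }
    }

    static class Container {
        List<Container> children = new ArrayList<>();
        List<Doc> docs = new ArrayList<>();
    }

    // Recursively walk containers; return the paths of documents whose
    // signature differs from what the portal last recorded. Those are the
    // cards that get flagged and refreshed; the rest stay untouched.
    static List<String> changedDocs(Container root, Map<String, String> lastSeen) {
        List<String> changed = new ArrayList<>();
        for (Doc d : root.docs) {
            if (!d.signature.equals(lastSeen.get(d.path))) {
                changed.add(d.path);
            }
        }
        for (Container c : root.children) {
            changed.addAll(changedDocs(c, lastSeen));
        }
        return changed;
    }

    public static void main(String[] args) {
        Container root = new Container();
        root.docs.add(new Doc("\\\\share\\a.doc", "2010-09-01"));
        Container sub = new Container();
        sub.docs.add(new Doc("\\\\share\\sub\\b.doc", "2010-09-20"));
        root.children.add(sub);

        Map<String, String> lastSeen = new HashMap<>();
        lastSeen.put("\\\\share\\a.doc", "2010-09-01");     // unchanged
        lastSeen.put("\\\\share\\sub\\b.doc", "2010-08-01"); // signature changed
        System.out.println(changedDocs(root, lastSeen)); // [\\share\sub\b.doc]
    }
}
```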

Now, let’s say you need to change the path of an NT Crawler because you’re moving those documents elsewhere.  Normally, you’d just move the files and change your crawler’s root path.  The problem with this approach is that the crawler won’t be able to recognize these files as the same ones that are in the Knowledge Directory, because the path has changed.  Consequently, all cards will be deleted and recreated.  This may not be a problem, but if your Content Managers are like any other Content Manager since the Plumtree days, there will be a lot of portlets that link to these documents in their content.  These links will all be broken, because new cards mean new Object IDs, which are part of those URLs (even the “friendly” ones).

The (partial) solution?  Update the paths for the crawlers AND cards through the API, so that the next time the crawler runs, the portal isn’t aware of any changes and doesn’t mess with any of the already-imported cards because the signatures match up.

Here’s the rub, though: not only does the Automation Server check whether a document’s SIGNATURE has changed (in an NT File Crawler, for example, the signature is just the “last-modified” date), but it also checks whether the document’s PATH has changed.  In other words, if a card has an internal path of \\oldfileshare\folder1\mydoc.doc, and you programmatically change the crawler AND the cards to use \\newfileshare\folder1\mydoc.doc, the cards will STILL get wiped out and crawled in as new.  This is because the portal maintains a CRC of the old document path, so that if it changes, it knows it’s looking at a different document.

Unfortunately, there doesn’t seem to be a way to update this CRC value through the API, so you need to use a direct DB update to make the change.  Below is the code used to generate the CRC and the table where it needs to be updated.  In my next post, I’ll include a more comprehensive listing.

// generate the CRC of the card's new location once; the 64-bit CRC is
// stored as two 32-bit halves in the portal database
var crc = XPCRC.GenerateCRC64(strCardLocation);
int crca = crc.m_crcA;
int crcb = crc.m_crcB;

DbCommand updateComm = oConn.CreateCommand();
updateComm.CommandType = CommandType.Text;
updateComm.CommandText = "update ptinternalcardinfo set locationcrc_a = " + crca
    + ", locationcrc_b = " + crcb + " where cardid = " + card.GetObjectID();
updateComm.ExecuteNonQuery();

Treat Collaboration Server as a REST-based API

Thursday, August 5th, 2010

The IDK methods for Collaboration Server are terribly sparse – you can’t get calendar events, file sizes, or a whole bunch of other critical data you’d want if you were to actually embark on a mission to write a better UI for Collab (trust me, I have).  Sure, you could try to use the woefully undocumented Collab API – I’ve shown you how to deploy the portal API in the past – but that’s a challenge in and of itself.

Instead, let’s look at an alternate approach:  use the Collab Server as a sort of REST API.  It’s not really, but the basic idea is that you use URLs in your code to directly call functionality in Collaboration Server to do certain tasks.  For example, say you want to add a Collaboration project to a page programmatically; there is no mechanism to do this through the IDK, and we have no idea how to use the API, but using a header tool, we find that through Project Explorer, it works with a simple URL: /collab/do/project/selector/add?commPage=true&projID=COLLABID.

So, it turns out we can do the same thing programmatically, by using Java’s network libraries to call that URL directly (setting the proper authentication).  The code after the jump shows an example of how to do this; we use this approach in Integryst’s Automater, which allows you to script a bunch of actions at a time (what good is automatically creating a Collab project if you can’t add it to a community page you just created!?).
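As a sketch of the approach, here’s how the call might look with Java’s standard HttpURLConnection. The /collab/do/project/selector/add path comes from the header capture above; the base URL, cookie handling, and helper names are assumptions for illustration:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class CollabUrlCall {
    // Build the "add project to page" URL captured from Project Explorer
    static String buildAddProjectUrl(String baseUrl, int projId) {
        return baseUrl + "/collab/do/project/selector/add?commPage=true&projID=" + projId;
    }

    // Fire the URL with the user's portal session cookie so Collab treats
    // the request as coming from the logged-in user (cookie value assumed
    // to be harvested from the current session).
    static int callAsUser(String url, String sessionCookie) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Cookie", sessionCookie);
        return conn.getResponseCode();
    }

    public static void main(String[] args) {
        System.out.println(buildAddProjectUrl("http://portal.example.com", 42));
    }
}
```

The same pattern works for any other Collab action you can capture with a header tool – build the URL, replay it with valid credentials.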

Tweak away!