Use a BigIP iRule to further defend against ShellShock
October 4th, 2014 by mattc

By now you’ve no doubt heard about ShellShock, and have quickly worked to patch all your systems against the most dangerous aspects of this pervasive exploit. You may even be aware that some users are reporting that the patch hasn’t fully closed the vulnerability (it seems that while the patch prevents arbitrary code execution, aliasing commands is still possible).

The exploit is pretty simple to execute; the user-agent header here will write the text “HACKED” to a file named hack.txt in the /tmp directory on a vulnerable server:

GET /cgi-bin/anypage.html HTTP/1.1
Host: yourhost
User-Agent: () { :;}; echo HACKED >>/tmp/hack.txt 
Accept: text/xml,application/xml
Accept-Language: en-us

So, in addition to patching your servers, if you’ve got a BigIP server in front of your systems, you can also set up an iRule to keep this traffic from ever reaching your servers, by looking for the telltale characters ("() {") in any of your headers.
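
A minimal sketch of such an iRule (the hardened versions posted on F5’s forum do a bit more, so treat this as illustrative rather than production-ready):

```tcl
# Drop any request carrying the Shellshock signature "() {" in any header
when HTTP_REQUEST {
    foreach header_name [HTTP::header names] {
        if { [HTTP::header values $header_name] contains "() \{" } {
            # Optional: log the attempt before dropping the connection
            log local0. "Shellshock attempt from [IP::client_addr] in header $header_name"
            drop
            return
        }
    }
}
```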

The details of the iRule are posted on F5’s forum, and F5 even maintains a dedicated, up-to-date ShellShock information page. There are two versions of the iRule: one that trades a tiny bit of performance to log the attack attempts, and one that’s slightly more performant but lacks logging.

Sure enough, within minutes of applying this iRule to our front-end servers at a client site, we started seeing attack attempts in the BigIP logs. So be warned: the bad guys are out there and they’re actively exploiting this bug, so do everything you can to secure your systems!

Hidden reset flag for WebCenter Interaction Analytics
March 25th, 2014 by mattc

Recently I worked on a migration from some WebCenter Interaction servers to a virtual environment (you’re still using physical servers? How 2013…).

After the migration, some documents weren’t showing up in WebCenter Analytics reports. As you probably know, Analytics events are stored in the ASFACT tables by objectID, while the ASDIM (dimension) tables record information about those objectIDs (it’s done this way so that, even if you delete an object in the portal, you don’t lose the record of its access). During some extensive testing, logging, and code decompiling, I discovered some interesting things that may be useful to you:

  • You can dramatically increase the logging that the Analytics sync job writes by changing the log settings to DEBUG in %PT_HOME%\ptanalytics\10.3.0\settings\config\. This will give you details all the way down to the exact queries being run against the database.
  • The number of objects synched to the dimension tables, along with the results of the sync jobs, is stored in the ASSYS_JOBLOGS table – and that table is never cleared.
  • The Analytics sync jobs that run nightly to update those dimension tables don’t actually look at all objects in the portal database; they only look for objects that have been updated since the last time the job ran. That timestamp comes from the ASSYS_JOBLOGS table.
  • There’s a hidden flag in the source code for the job called “-reset” that clears the job log entry for that particular job, causing all objects for that job type to be re-scanned.
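
If you want to see what the sync jobs have recorded, you can peek at that job-log table directly. The table name comes from the behavior above, but the useful columns vary by Analytics version, so treat this as a starting point:

```sql
-- Job history for the Analytics sync jobs; this table is never cleared automatically
SELECT * FROM ASSYS_JOBLOGS;
```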

Bottom line: if some of your Analytics reports don’t seem to contain all the objects you’re expecting, it’s possible that events have been recorded but the dimensions simply don’t exist for them. If that’s the case, you can resolve it by adding -reset to the external operation in the WCI portal. Just remember to remove the flag after the job runs so that you’re not clearing the logs every night and generating unnecessary load.

Use Firefox 3D view to diagnose CSS issues
January 5th, 2014 by mattc

It’s been a while since our last post as I’ve been busy with my other blog, but this little gem was too neat to pass up (thanks for the tip, Aseem!).

When diagnosing complex CSS or HTML issues with multiple layers, there’s a nifty little 3D view built into Firefox that lets you rotate around all the various layers, inspecting the elements that may be causing you problems.

Simply hit F12 to bring up the debugger pane (which, incidentally, opens the dev tools in IE and Chrome as well), then click the “3D View” button.

Server-Side Validation – Deleting Knowledge Directory Cards
April 29th, 2013 by mattc

Those of you familiar with the WebCenter Interaction Knowledge Directory have likely needed to delete a batch of cards from a folder. But if you have a lot of cards and try to delete more than 50 at a time, you’re presented with this little gem:

That’s a JavaScript popup that refuses to let you delete more than 50 cards at a time – but what if you have 25,000 cards to delete? This scenario presented itself to me recently, and I was definitely not looking forward to 500 separate deletions, made even more tedious by the fact that the Knowledge Directory shows only 20 cards at a time, so each deletion requires changing the pagination as well.

Importantly, there is a difference between “client-side validation” and “server-side validation” – a JavaScript check that stops users from doing something in the browser is completely different from the same check enforced on the server. Fortunately for me, the original developers neglected to add a server-side check preventing the deletion of more than 50 cards. So, using the IE Developer Tools (hit F12), I was able to add a breakpoint at the JavaScript condition and simply use the “Set next statement” option to force the method to return true.

That way, I could get the browser to submit deletion requests for 1,000 or more objects at a time.

Use this as a reminder as you develop your custom code: always confirm data entry both on the client and server sides if you don’t want users doing things they shouldn’t!
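
The principle is easy to sketch. This is hypothetical code, not WCI’s actual implementation: the client-side check is merely advisory, so the server-side handler must enforce the same rule itself.

```python
# Hypothetical server-side handler -- not WCI's actual code
MAX_DELETE = 50  # the same limit the JavaScript popup enforces client-side

def delete_cards_server_side(card_ids):
    """Re-validate the batch size on the server, even though the UI checks it too."""
    if len(card_ids) > MAX_DELETE:
        raise ValueError(f"refusing to delete {len(card_ids)} cards; the limit is {MAX_DELETE}")
    # ... the actual deletion logic would go here ...
    return len(card_ids)

print(delete_cards_server_side(list(range(20))))  # 20

# A client that bypasses the JavaScript check still gets stopped here:
try:
    delete_cards_server_side(list(range(1000)))
except ValueError as e:
    print("blocked:", e)
```

Had the original developers done this, my “Set next statement” trick would have gotten me nowhere.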

Understanding SQL Injection Vulnerabilities
April 25th, 2013 by mattc

Every now and then I get a report from some security auditor that Plumtree (or ALUI or WCI) has a “SQL Injection Vulnerability”. While this blog has seen more than one security-related post, SQL injection is not a likely attack vector here.

The reason for this is simply that WebCenter Interaction (and BEA ALUI before that, and Plumtree before that – this thing has always had a very solid foundation!) uses PREPARED STATEMENTS (and, to a lesser extent, STORED PROCEDURES). The wiki post above describes how SQL injection works, and the classic example illustrates it exceptionally well:

You can either use BAD SQL that exposes the application to SQL injection:

Statement stmt = conn.createStatement();
stmt.execute("INSERT INTO students VALUES('" + user + "')");

… or you can use GOOD CODE to avoid it:

PreparedStatement stmt = conn.prepareStatement("INSERT INTO students VALUES(?)");
stmt.setString(1, user);
stmt.execute();

Now, let’s say that a malicious end user enters a value of “Robert'); DROP TABLE students; --” for their user name.

The first example would run this SQL Statement:

INSERT INTO students VALUES('Robert'); DROP TABLE students; --')

… which would immediately drop your entire students table.

The “good code”, though, would simply insert the literal string “Robert'); DROP TABLE students; --” into the students table. Not perfect, sure. But at least your database would be protected from end users running their own SQL against it!
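
The difference is easy to demonstrate outside of WCI. Here’s a minimal sketch using Python’s built-in sqlite3 module (any database with parameterized queries behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

user = "Robert'); DROP TABLE students; --"

# BAD: string concatenation turns the input into SQL. (Python's sqlite3
# happens to refuse multi-statement execute() calls, but many drivers
# will happily run both statements.)
bad_sql = "INSERT INTO students VALUES('" + user + "')"
print(bad_sql)  # the payload has become part of the SQL itself

# GOOD: a parameterized query treats the input as a plain value.
conn.execute("INSERT INTO students VALUES(?)", (user,))
stored = conn.execute("SELECT name FROM students").fetchone()[0]
print(stored == user)  # True -- stored as data, never executed
```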

Guess which type WebCenter Interaction uses? If you guessed the latter, you’d be right. And you can move on from claims of “SQL Injection Vulnerabilities” – there’s nothing to see here. Of course, there’s plenty to be seen elsewhere, but that’s another story!

Cool Tools 26:
April 20th, 2013 by mattc

This is more of a consumer-grade site that I’d recommend to everyone – including my parents when they complain about the “Interwebs being slow”. But if your IT shop is promising specific Internet bandwidth for your portal servers, there’s nothing to stop you from RDP’ing into your server and running the test there to get a “second opinion”.

It is pretty laden with ads, but they aren’t too distracting. And it does require that virus called Adobe Flash, which isn’t always (and shouldn’t be!) installed on servers. But, if you’re dealing with performance issues that feel like they’re related to the network, and you’ve tested internal network connections, it can be worth temporarily installing Flash.

For example, I’m pretty sure our cloud hosting provider guarantees 10Mbps both ways… so I should get on them with these results!

It is worth pointing out that, despite the joke about getting on our cloud provider, a speed test shares your Internet connection with everything else using it at any given point in time. So just because the above screen shot shows this machine downloading at only 6.31Mbps, that doesn’t mean the 10Mbps pipe offered by our hosting provider is under-delivering. It’s possible that other machines in our infrastructure are burning bandwidth too. And, since this is a production environment, I would HOPE there is activity on our network at any given point in time as pages are served from the portal.

Give it a shot – even if you’re sitting in front of your work computer at this very moment. You may be surprised about the relative speed differences between your home and office.

One surprising little fact is that I pay about $100/month for about 100Mbps from Comcast. But most commercial hosting providers charge up to 10x the cost for 1/10th the speed. Really – that’s a 100x markup per megabit! The difference, of course, is that Comcast doesn’t GUARANTEE those speeds – or even availability. So you couldn’t run a real production web site off Comcast, since it is occasionally down or under-performing. Still, it’s food for thought: at the very highest service levels, costs increase dramatically. Same thing with the “Five 9’s” mandate – but that’s a blog post for another day…

Shrink and Clean Your Virtual Machines
April 15th, 2013 by mattc

I’m a rabid advocate of VMware Workstation (and Server, for that matter!). I maintain a separate virtual machine – and sometimes more than one – for each client I’ve worked with over the years. This has many advantages, including the ability to pull an old VM out of the archive for a returning client, the ability to install different versions of the portal to match the client’s configuration, and the capability to run different VPN software on the same host machine at the same time (because each VM maintains its own network stack and VPN software configuration).

But having 10-20 VMs can be a bit painful when it comes to keeping them up-to-date and keeping the file sizes down. While I still haven’t figured out how to apply all Windows Updates via the command line, each of my VMs now has a batch file that compresses and optimizes its disk. So, once a month I log into each VM, run Windows Update, and run the batch file below to shrink things up.

You can download the cleanup.bat file, or just copy/paste from the below code. Basically, the script performs the following activities:

  1. Delete the files in c:\windows\temp
  2. Delete the temporary Windows Update files in c:\windows\SoftwareDistribution\Download\
  3. Delete the IE update files
  4. Run the Disk Cleanup utility (Note: you need to do a one-time run of the command “cleanmgr /sageset:1” to set your preferences – see the Microsoft KB article on the topic)
  5. Empty the recycle bin
  6. Defrag the drives
  7. Run the VMWare drive shrink utility
  8. Shut down the machine
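
The steps above look roughly like the sketch below. The paths, the sageset index, and the VMware Tools location are assumptions you’ll want to adjust for your own VMs (and the IE-update path in step 3 varies enough by Windows version that I’ve left it out here):

```bat
@echo off
rem 1. Purge Windows temp files
del /q /s "%windir%\Temp\*" 2>nul

rem 2. Purge the Windows Update download cache
del /q /s "%windir%\SoftwareDistribution\Download\*" 2>nul

rem 4. Run Disk Cleanup with the profile saved via "cleanmgr /sageset:1"
cleanmgr /sagerun:1

rem 5. Empty the recycle bin (brute force)
rd /s /q "%systemdrive%\$Recycle.Bin" 2>nul

rem 6. Defragment the system drive
defrag c:

rem 7. Ask VMware Tools to shrink the virtual disk
"%ProgramFiles%\VMware\VMware Tools\VMwareToolboxCmd.exe" disk shrink c:\

rem 8. Power off
shutdown /s /t 0
```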

So, do you have any optimizations that you think could be added to this script?


Do you use WebCenter Interaction Wikis or Blogs?
April 10th, 2013 by mattc

Those of us who go way back with Plumtree remember PEP (Pages, Ensemble, Pathways), which were pretty much futile attempts to break into more “modern” technologies like Blogs, Wikis, RSS, and other buzzword-worthy tech after “The Portal” had been conquered. It didn’t go well, but we won’t dwell on that. We all make mistakes.

I had planned on doing a review of WebCenter Interaction Collaboration Server in Neo, a.k.a. 10gR4. But honestly, there’s not much you need to know, and most of you have moved on anyway. There is now some UCM integration (which, of course, has already been re-branded) and what looks like some Excel functionality that I honestly haven’t even looked at, but which probably fixes some issues with the late-90s-era Excel portlet.

Oh, and it looks like they re-signed those WebEdit and Upload controls.

But what frankly came off as almost insulting was the “Blog” and “Wiki” functionality that’s now included. I’ll just drop two hints: Atlassian Confluence and WordPress. Don’t waste your time with Collab.

For the morbidly curious, I’ve included some screen shots after the break.

Rebuilding WebCenter Collaboration Index By Project
April 4th, 2013 by mattc

Another tip of the hat to Brian Hak for this pretty awesome Hak (see what I did there?).

Last year, Brian was faced with a problem: some documents in WebCenter Interaction Collaboration Server weren’t properly indexed, and his only option was to rebuild the ENTIRE index (a pain we’re all pretty familiar with). With many thousands of documents, the job would never finish, and end users would be frustrated with incomplete results while the job toiled away after wiping everything out.

So he took it upon himself to write CollabSearchRebuilder. CollabSearchRebuilder allows you to rebuild specific subsets of documents without having to wipe out the whole index and start all over again.

Feel free to download the source and build files and check it out!

Cool Tools 25: LAN Speed Test
March 30th, 2013 by mattc

Sometimes the coolest tools are the simplest ones.

Today’s Cool Tool is simply called LAN Speed Test, and it does pretty much just that – it tests the speed of various connections on your LAN by writing a 20MB (configurable) file to a file share, reading it back, and timing how long the transfers take.
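
The same idea is easy to sketch in a few lines of Python – a rough stand-in for the tool, not its actual implementation:

```python
import os
import tempfile
import time

def lan_speed_test(target_dir, size_mb=20):
    """Write, then read back, a size_mb file in target_dir; return (write_s, read_s)."""
    chunk = os.urandom(1024 * 1024)  # 1MB of random data (defeats compression)
    path = os.path.join(target_dir, "speedtest.tmp")
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hits the share
    write_s = time.perf_counter() - start
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    read_s = time.perf_counter() - start
    os.remove(path)
    return write_s, read_s

# Point target_dir at a UNC path such as r"\\PT-COLLAB\share" to test that hop.
w, r = lan_speed_test(tempfile.gettempdir(), size_mb=5)
print(f"write: {w:.2f}s  read: {r:.2f}s")
```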


The use case for this was pretty simple: our WCI Portal machine (with an alias of PT-PORTAL) was in a DMZ, and the back-end servers (in this case, PT-INTEGRATION and PT-COLLAB) were on a separate subnet. The NT Crawler web service was acting very strangely, failing to serve files through the portal but serving them up locally just fine.

So, using LAN Speed Test, I was able to confirm (and prove to the network team) that the problem was in the switch/firewall connecting the devices. Notice in the above screen shot how PT-INTEGRATION was able to write and read a 20MB file to PT-COLLAB in about 0.24s and 0.28s, respectively? And how writing the same data to PT-PORTAL took 20.6s and 2.2s?

Yeah, there’s the smoking gun…