In a previous post, I presented the broad lines of a proposal for a simple yet (hopefully) powerful centralization of logs on a Documentum platform, a solution I have now been using for more than one year. If you haven’t read that first post, I’d recommend doing so.
This second part aims at giving you a practical preview – a taste – of the solution in a short time: the aim is to have your Documentum environment instrumented and real-time monitored in 30 minutes!
Yes, I know, quite a challenge for the busy people we all are :-). By the end of this post, you should be able to perform real-time webtop/JMS transaction monitoring and decide whether the solution suits your needs, and whether you want to move to a quick-win deployment for development only or a complete deployment to your Documentum production environment. I could be wrong, but I personally consider that no solution on the market gives the same level of real-time Documentum monitoring for the same level of effort, while keeping complete control over the implemented technology. I hope you will think the same way after this post.
OK, the clock is ticking… A small reminder of the solution proposed:
- logfaces as log server. Logfaces will receive the logs of all components of our Documentum platform.
- logstash to forward legacy log files (docbroker, docbase) to the logfaces server
- log4j logfaces appender to forward the logs of log4j-enabled components (webtop, JMS, CTS, ACS, index server)
- a small servlet filter to trace webtop and JMS transactions
What is Logfaces? Logfaces is a centralized logging solution providing a powerful interface based on Eclipse RCP, dedicated asynchronous appenders, a truly syslog-compliant log server, and a corresponding high-end database persistence mechanism (with MongoDB as the recommended NoSQL database for high-throughput environments). For those familiar with solutions like Logstash, ElasticSearch, Kibana or Graylog2, we are in the same scope of usage. I know this video is not related to Documentum, but I think it gives a good introduction to when and how such tools are used.
In this post, we will focus on the instrumentation of the Webtop and JMS components. In order to meet our challenge, we will take a shortcut and skip the full installation of the logfaces server for the moment. The logfaces client has a very handy option: when starting it, you can make it behave exactly as a server. For this first taste, we will use the server mode of the client, so that we do not have to install a server yet. Download the logfaces client from the logfaces download page and install it on the machine you will display logs on. Once the client is started, run it in Server mode and enable all server ports (Log4x TCP server, Log4x UDP server, Syslog TCP server and Syslog UDP server).
Prerequisite: the servlet filter we will deploy to webtop provides limited information if you deploy it to a webtop instance which has fail-over enabled (HTTP session replication): none of the additional MDC info will be available. To verify whether your webtop has failover enabled, check the settings defined for the “not portal” context in the \wdk\app.xml file. Note that enabling failover in an environment that does not support it (single-node container, container not supporting failover) is useless, as stated in this WDK Deployment Guide extract:
<failover>
    <filter clientenv='portal'>
        <enabled>false</enabled>
    </filter>
    <filter clientenv='not portal'>
        <enabled>false</enabled>
    </filter>
</failover>
Webtop instrumentation using a servlet filter:
Step 0: Download the MDCFilter.zip file from the MDCServletFilter project on sourceforge and unzip it to the temporary location we will call “<unzipped>”.
Step 1: Copy the <unzipped>\mdcservletfilter.jar file to the <webtop>\WEB-INF\lib folder of your webtop application.
Step 2: The servlet filter uses log4j MDC. The 6.5.X, 6.6 and 6.7.X versions of webtop use log4j 1.2.13; unfortunately, this version is buggy when it comes to MDC. I can understand your potential reluctance to upgrade to a newer version, but going from 1.2.13 to 1.2.17, for the use webtop (or any other Documentum component) makes of log4j, is in my opinion completely safe. I upgraded to 1.2.17 more than one year ago and have had no problem at all. Replace your log4j jar (<webtop>\WEB-INF\lib) with the 1.2.17 one from the <unzipped> folder.
Step 3: The servlet filter uses a configuration file. Copy the <unzipped>\webtop\MDCFilter-config.xml file to the <webtop>\WEB-INF\classes folder.
Step 4: Edit the web.xml file in the \<webtop>\WEB-INF folder and declare the MDCServletFilter filter and the corresponding filter-mapping by adding the following section just before the WDKController filter-mapping definition.
<filter>
    <filter-name>MDCFilter</filter-name>
    <description>Add MDC contexts dynamically</description>
    <filter-class>com.wordpress.stephanemarcon.filters.mdc.MDCFilter</filter-class>
    <init-param>
        <param-name>config-file</param-name>
        <param-value>MDCFilter-config.xml</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>MDCFilter</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>
Step 5: Download the lfsappenders-x.x.x.jar from the logfaces download page and copy it to your <webtop>\WEB-INF\lib folder.
Step 6: Edit the <webtop>\WEB-INF\classes\log4j.properties file and add the following section to it. Set the log4j.appender.LFS.remoteHost property to the name of the machine running the logfaces server (which is the logfaces client in our case). For example, if you are running your webtop and your logfaces client on the same machine, you may use log4j.appender.LFS.remoteHost=127.0.0.1
# Custom appenders
log4j.logger.com.wordpress=INFO, LFS

# LogFaces appender
log4j.appender.LFS=com.moonlit.logfaces.appenders.AsyncSocketAppender
log4j.appender.LFS.application = DEMO-webtop
log4j.appender.LFS.remoteHost = <logfaces server url>
log4j.appender.LFS.port = 55200
log4j.appender.LFS.locationInfo = true
log4j.appender.LFS.threshold = ALL
log4j.appender.LFS.reconnectionDelay = 5000
log4j.appender.LFS.offerTimeout = 0
log4j.appender.LFS.queueSize = 1000
log4j.appender.LFS.backupFile = D\:/Temp/lfs-backup.log
OK, you can now start your webtop application and go to the login page. You should then see a “DEMO-webtop” domain on the right-hand side of the logfaces client:
Right-click on this “DEMO-webtop” domain and click on “Create TRACE perspective”. As a small reminder, the log4j levels, from most verbose to least, are: ALL, TRACE, DEBUG, INFO, WARN, ERROR, FATAL, OFF. Starting a “TRACE” view enables the display of all logs forwarded to logfaces, and therefore of the logs produced by the MDCFilter (which, by the way, are INFO-level logs).
This is pretty cool but not enough, as the columns displayed only give the out-of-the-box information log4j events contain. Our MDCFilter adds more information to those events. In order to display this additional information, the corresponding columns must be enabled: open the File -> Preferences -> MDC Names menu and add the following MDC column names: Uri, ExecTime, UserName, Action, Arguments.
Now, if you enable the display of those columns by clicking the “Columns” button in the logs frame and selecting them under Diagnostic context, you should see more contextualized logs:
Every time an action is performed, you should now see a lot more info, actually enough information to tell what action the user is performing.
If you have a quick look at the MDCFilter configuration, you will understand that the only thing it does is generate “INFO”-level logs with information contained in HTTP session attributes, HTTP request attributes and parameters. That is actually all, nothing more. There is no logfaces-specific code here; we could just as well have logged this information to a local log file, but forwarding it to the logfaces interface makes it much more usable.
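Conceptually, the MDC the filter relies on is just a per-thread key/value map that log statements can read from: populate it at the start of a request, clean it up at the end. Here is a minimal, stdlib-only sketch of that pattern; this is illustrative code under my own names (MdcSketch, handleRequest), not the actual MDCFilter implementation, which delegates to org.apache.log4j.MDC.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the MDC pattern: a per-thread map of diagnostic
// values, populated before handling a request and cleared afterwards so
// the (pooled, reused) thread does not leak context into the next request.
public class MdcSketch {

    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) {
        CONTEXT.get().put(key, value);
    }

    public static String get(String key) {
        return CONTEXT.get().get(key);
    }

    public static void clear() {
        CONTEXT.get().clear();
    }

    // Mimics what a servlet filter does around each request: copy
    // request/session attributes into the context, run the request,
    // then clean up in a finally block.
    public static String handleRequest(String userName, String action) {
        put("UserName", userName);
        put("Action", action);
        try {
            // any log statement issued here could include the MDC values
            return "INFO [" + get("UserName") + "] " + get("Action");
        } finally {
            clear();
        }
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("jdoe", "checkout"));
    }
}
```

The real filter does the equivalent around each HTTP request, with org.apache.log4j.MDC playing the role of the thread-local map, which is why the Uri/ExecTime/UserName/Action/Arguments values show up attached to every event logged during that request.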
If you use either DfLogger or a basic log4j logger in your own code, you can also forward those logs to logfaces, to ease your development for example, using the same appender or another one.
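For instance, reusing the LFS appender already declared in the webtop log4j.properties, routing your own code's logs is a one-line addition. The logger namespace below is a placeholder for your own code's package:

```properties
# Hypothetical example: route your own application loggers to logfaces,
# reusing the LFS appender defined earlier in this file.
# "com.mycompany.myapp" is a placeholder for your code's package.
log4j.logger.com.mycompany.myapp=DEBUG, LFS
```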
We are focusing here on webtop transaction logging, but you probably realize by now that you may also add configuration to forward all webtop error-level logs to logfaces, which can be of great help.
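As a sketch of what that could look like: a second instance of the same appender class, with an ERROR threshold, attached to the root logger. The LFS_ERR name is mine, and whether you want to touch the root logger configuration of your webtop is of course your call:

```properties
# Sketch: forward every ERROR-level (and above) webtop event to logfaces.
# LFS_ERR is a second instance of the same appender class as LFS, with a
# higher threshold; attaching it to the root logger catches all loggers.
log4j.appender.LFS_ERR=com.moonlit.logfaces.appenders.AsyncSocketAppender
log4j.appender.LFS_ERR.application = DEMO-webtop
log4j.appender.LFS_ERR.remoteHost = <logfaces server url>
log4j.appender.LFS_ERR.port = 55200
log4j.appender.LFS_ERR.threshold = ERROR
log4j.rootLogger=WARN, LFS_ERR
```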
JMS instrumentation using the custom servlet filter:
OK, we are halfway through our challenge :-). Let’s instrument the Java Method Server so that it forwards info about what is happening on it. You will notice these steps are very similar to the ones we used for instrumenting webtop.
These steps are valid for Documentum 6.X environments, but the actual instrumentation should be the same on both older and newer versions. Nevertheless, it is difficult to test it on all Documentum versions, and I unfortunately do not have the time to build a decent installer for it.
Step 0: Connect to your Content Server.
Step 1: Stop your JMS.
Step 2: Download the lfsappenders-x.x.x.jar from the logfaces download page and:
- copy it to your <documentum>\jbossX.X.X\server\DctmServer_MethodServer\lib folder.
- copy it to your <documentum>\jbossX.X.X\server\DctmServer_MethodServer\deploy\ServerApps.ear\lib folder.
Note that this will make it available for further use by other components like ACS/BOCS.
Step 3: Replace the log4j jar contained in the <documentum>\jbossX.X.X\server\DctmServer_MethodServer\deploy\ServerApps.ear\lib folder with the 1.2.17 one from the <unzipped> folder.
Step 4: Edit the <documentum>\jbossX.X.X\server\DctmServer_MethodServer\deploy\ServerApps.ear\APP-INF\classes\log4j.properties file and add the following section to it. Same as for webtop, set the log4j.appender.LFS.remoteHost property to the name of the machine running the logfaces server.
# Custom appenders
log4j.logger.com.wordpress=INFO, LFS

# LogFaces appender
log4j.appender.LFS=com.moonlit.logfaces.appenders.AsyncSocketAppender
log4j.appender.LFS.application = DEMO-JMS
log4j.appender.LFS.remoteHost = <logfaces server url>
log4j.appender.LFS.port = 55200
log4j.appender.LFS.locationInfo = true
log4j.appender.LFS.threshold = ALL
log4j.appender.LFS.reconnectionDelay = 5000
log4j.appender.LFS.offerTimeout = 0
log4j.appender.LFS.queueSize = 1000
log4j.appender.LFS.backupFile = D\:/Temp/lfs-backup.log
Step 5: Copy the <unzipped>\mdcservletfilter.jar and <unzipped>\castor-1.1-xml.jar files to the <documentum>\jbossX.X.X\server\DctmServer_MethodServer\deploy\ServerApps.ear\lib folder.
Step 6: The servlet filter uses a configuration file. Copy the <unzipped>\jms\MDCFilter-config.xml file to the <documentum>\jbossX.X.X\server\DctmServer_MethodServer\deploy\ServerApps.ear\DmMethods.war\WEB-INF folder.
Step 7: Edit the web.xml file in the <documentum>\jbossX.X.X\server\DctmServer_MethodServer\deploy\ServerApps.ear\DmMethods.war\WEB-INF folder and declare the MDCFilter filter and the corresponding filter-mapping by adding the following section just before the DoMethod servlet definition. !!! Be careful to specify an absolute path to the MDCFilter-config.xml file !!!
Example path: D:/Documentum/jboss5.1.0/server/DctmServer_MethodServer/deploy/ServerApps.ear/DmMethods.war/WEB-INF/MDCFilter-config.xml
<filter>
    <filter-name>MDCFilter</filter-name>
    <description>Add MDC contexts dynamically</description>
    <filter-class>com.wordpress.stephanemarcon.filters.mdc.MDCFilter</filter-class>
    <init-param>
        <param-name>config-file</param-name>
        <param-value>absolute_path_to_MDCFilter-config.xml</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>MDCFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
Step 8: You can now start your JMS server.
If you followed all the above steps, you should now see one line for each call to the Java Method Server, and thus one line for each Java method execution.
NOW SOME REAL-TIME FULL-COLOR CONTEXTUALIZED WEBTOP TRANSACTIONS MONITORING
Yes, I know… I’m not good at finding teasing headings…
We now have something that starts to be really interesting. The servlet filter we added to Webtop sends its INFO-level messages to our logfaces real-time view, giving interesting information about all relevant requests handled by our application. On the other side, you should also see logs forwarded by the servlet filter we deployed to our JMS server, which extracts contextual information from the requests sent to the DoMethod servlet (which, as you know, is the actual engine of the JMS server).
Let’s see how we can tune the logfaces viewer for real-time transaction monitoring. Logfaces has a very interesting feature called “tags”. It permits the visual tagging of log events in the interface based on user-defined criteria. If you followed all the steps of the webtop instrumentation, you can, for example, define a tag for the checkout operation in File -> Preferences -> Tags.
You could end up with this sort of colorful usage of logfaces “tags” (if you are interested in the exact tag definitions I made for this tutorial, please post a request and I will add them somewhere).
CONCLUSION OF PART 2
As I explained in my first post, there are several cases where this log centralization solution may be used:
- as a tool for pure monitoring of a Documentum platform. The aim of this Part 2 was to give a real-life introduction to this use case. Depending on the feedback/comments I get about Part 2, I will write a third dedicated post giving further details about how to instrument the other most commonly used Documentum components (ACS, Content Server processes, docbrokers, CTS and indexing server). Logfaces’ syslog compatibility, email alerting and built-in reporting may also be used to support this use case.
- as a tool to ease the day-to-day work of developers. There are a lot more features of the logfaces solution itself to present in support of this use case (logs-to-file export, source code linking,…). Depending on the feedback/comments I get on this post, I may write dedicated posts describing those.
- as a tool to share logs with other departments or teams, in case your Documentum platform provides services to other systems. Logfaces has a nice feature which permits defining security over log events. Imagine you provide services to other systems (using DFS for example): you may decide to give the calling systems’ support teams access to the corresponding part of the logs. In many cases, such direct access to logs can save you a lot of time, helping each side of the interface quickly identify where a potential problem sits.
A full reading of the logfaces manual would not have fit into the 30-minute challenge, but now that we are reaching the end of the post, you may refer to it for a deeper understanding of logfaces. I hope my post helped you understand the tool, but there is nothing better than a good old manual.
When it comes to such broad-scope solutions, it is quite difficult to write helpful, detailed material about them. I hope the aim of this post is reached and that it gave you a way to investigate whether logfaces can be a candidate for your environment. I have always been a big fan of tools with great usability/cost and quality/cost ratios. Logfaces is definitely one of those and fully justifies its (actually very low) price, even if you only consider its benefit to development productivity. We all use interesting tools/technologies we have spent time experimenting with, selecting and integrating into our architecture, ideally at the lowest cost. My aim is to post other “reviews” of tools I strongly recommend, hoping this can be of help to others.