Documentum Monitoring – platform logs’ centralization using Logfaces – Part 2


In a previous post, I presented the broad outlines of a simple yet (hopefully) powerful proposal for centralizing the logs of a Documentum platform, a solution I have now been using for more than one year. If you haven’t read that first post, I recommend doing so.

This second part aims at giving you a practical preview – a taste – of the solution in a short time. In short, the goal is to have your Documentum environment instrumented and monitored in real time within 30 minutes!

Yes, I know, quite a challenge for the busy people we all are :-). By the end of this post, you should be able to monitor webtop/JMS transactions in real time and to decide whether the solution suits your needs, and whether to go for a quick-win deployment for development only or for a complete deployment to your Documentum production environment. I could be wrong, but I personally believe no solution on the market gives the same level of real-time Documentum monitoring for the same level of effort, while still keeping complete control over the implemented technology. I hope you will think the same way after reading this post.

OK, the clock is ticking… A small reminder of the solution proposed:

  • logfaces as log server. Logfaces will receive the logs of all components of our Documentum platform.
  • logstash to forward legacy log files (docbroker, docbase) to the logfaces server
  • log4j logfaces appender to forward the logs of log4j-enabled components (webtop, JMS, CTS, ACS, index server)
  • a small servlet filter to trace webtop and JMS transactions

What is Logfaces? Logfaces is a centralized logging solution providing a powerful interface based on Eclipse RCP, dedicated asynchronous appenders, a real syslog-compliant log server and a corresponding high-end database persistence mechanism (with MongoDB as the recommended NoSQL database for high-throughput environments). For those familiar with solutions like Logstash, ElasticSearch, Kibana or Graylog2, we are in the same scope of usage. I know this video is not related to Documentum, but I think it gives a good introduction to when and how such tools are used.


In this post, we will focus on the instrumentation of the Webtop and JMS components. To meet our challenge, we will temporarily take a shortcut and skip the full installation of the logfaces server for now. The logfaces client has a very handy option: you can start it so that it behaves exactly like a server. For this first taste, we will use the client’s server mode, so we won’t have to install a server yet. Download the logfaces client from the logfaces download page and install it on the machine you will display the logs on. Once the client has started, run it in server mode and enable all server ports (Log4x TCP server, Log4x UDP server, Syslog TCP server and Syslog UDP server).

logfaces_client_server_mode

Once the client has started, leave it open.


Prerequisite: the servlet filter we will deploy to webtop provides limited information if you deploy it to a webtop instance which has fail-over enabled (HTTP session replication): none of the additional MDC information will be available. To verify whether your webtop has failover enabled, check the settings defined for the “not portal” context in the \wdk\app.xml file. Note that enabling failover in an environment that does not support it (single-node container, or a container not supporting failover) is useless, as stated in this WDK Deployment Guide extract:

WDK app Deployment guide

<failover>
  <filter clientenv='portal'>
    <enabled>false</enabled>
  </filter>
  <filter clientenv='not portal'>
    <enabled>false</enabled>
  </filter>
</failover>

Webtop instrumentation using a servlet filter:

Step 0: Download the MDCFilter.zip file from the MDCServletFilter project on sourceforge and unzip it to the temporary location we will call “<unzipped>”.

Step 1: Copy the <unzipped>\mdcservletfilter.jar jar to the <webtop>\WEB-INF\lib folder of your webtop application.

Step 2: The servlet filter uses log4j MDC. Webtop versions 6.5.x, 6.6 and 6.7.x ship with log4j 1.2.13, and unfortunately this version is buggy when it comes to MDC. I can understand a potential reluctance to upgrade to a newer version, but going from 1.2.13 to 1.2.17 for the use webtop (or any other Documentum component) makes of log4j is, in my view, completely safe. I upgraded to 1.2.17 more than one year ago with no problem at all. Replace your log4j jar (in <webtop>\WEB-INF\lib) with the 1.2.17 one from the <unzipped> folder.

Step 3: The servlet filter uses a configuration file. Copy the <unzipped>\webtop\MDCFilter-config.xml file to the <webtop>\WEB-INF\classes folder.

Step 4: Edit the web.xml file in the \<webtop>\WEB-INF folder and declare the MDCServletFilter filter and the corresponding filter-mapping by adding the following section just before the WDKController filter-mapping definition.

<filter>
  <filter-name>MDCFilter</filter-name>
  <description>Add MDC contexts dynamically</description>
  <filter-class>com.wordpress.stephanemarcon.filters.mdc.MDCFilter</filter-class>
  <init-param>
    <param-name>config-file</param-name>
    <param-value>MDCFilter-config.xml</param-value>
  </init-param>
</filter>

<filter-mapping>
  <filter-name>MDCFilter</filter-name>
  <url-pattern>/*</url-pattern>
  <dispatcher>REQUEST</dispatcher>
</filter-mapping>

Step 5: Download the lfsappenders-x.x.x.jar from the logfaces download page and copy it to your <webtop>\WEB-INF\lib folder.

Step 6: Edit the <webtop>\WEB-INF\classes\log4j.properties file and add the following section to it. Set the log4j.appender.LFS.remoteHost property to the name of the machine running the logfaces server (which is the logfaces client in our case). For example, if you are running your webtop and your logfaces client on the same machine, you may use log4j.appender.LFS.remoteHost=127.0.0.1

# Custom appenders
log4j.logger.com.wordpress=INFO, LFS

#LogFaces appender
log4j.appender.LFS=com.moonlit.logfaces.appenders.AsyncSocketAppender
log4j.appender.LFS.application = DEMO-webtop
log4j.appender.LFS.remoteHost = <logfaces server url>
log4j.appender.LFS.port = 55200
log4j.appender.LFS.locationInfo = true
log4j.appender.LFS.threshold = ALL
log4j.appender.LFS.reconnectionDelay = 5000
log4j.appender.LFS.offerTimeout = 0
log4j.appender.LFS.queueSize = 1000
log4j.appender.LFS.backupFile = D\:/Temp/lfs-backup.log
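To illustrate the queueSize and offerTimeout settings above: an asynchronous appender of this kind essentially offers each event to a bounded in-memory queue which a background thread drains towards the server, so a slow or dead log server never blocks the application. The following is a minimal stdlib sketch of that dropping behaviour (illustrative only, not logfaces’ actual implementation):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the queueing semantics behind queueSize / offerTimeout = 0:
// the logging thread offers events to a bounded queue; when the queue is
// full, the event is dropped (or diverted to the backupFile) instead of
// blocking the caller.
public class AsyncQueueSketch {
    final BlockingQueue<String> queue;

    AsyncQueueSketch(int queueSize) {
        this.queue = new ArrayBlockingQueue<>(queueSize);
    }

    // Returns false when the queue is full: the application thread
    // never waits on the logging back end.
    boolean append(String event) {
        return queue.offer(event);
    }
}
```

A background worker thread would normally drain this queue and write the events to the network socket; the key point is that appending is non-blocking for the application.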

OK, you can now start your webtop application and go to the login page. You should see a “DEMO-webtop” domain on the right-hand side of the logfaces client:

logfaces_DEMO_webtop_domain

Right-click on this “DEMO-webtop” domain and click on “Create TRACE perspective”. As a small reminder, the log4j levels, ordered from least to most severe, are: ALL < TRACE < DEBUG < INFO < WARN < ERROR < FATAL < OFF. Starting a “TRACE” view displays all logs forwarded to logfaces, and therefore the logs produced by the MDCFilter (which, by the way, are INFO-level logs).
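This threshold logic can be sketched as a simple ordering comparison, which is why the INFO-level MDCFilter events show up in a TRACE perspective (illustrative only; log4j’s Level class works on the same principle internally):

```java
// Illustrative sketch of log4j level filtering: an event is displayed
// when its level is at least as severe as the configured threshold.
public class LevelDemo {
    enum Level { ALL, TRACE, DEBUG, INFO, WARN, ERROR, FATAL, OFF }

    // true if an event at 'event' level passes a 'threshold' filter
    static boolean passes(Level threshold, Level event) {
        return event.ordinal() >= threshold.ordinal();
    }

    public static void main(String[] args) {
        System.out.println(passes(Level.TRACE, Level.INFO));  // true
        System.out.println(passes(Level.WARN, Level.DEBUG));  // false
    }
}
```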

logfaces_DEMO_webtop_standard_logs

This is pretty cool but not enough, as the columns displayed only show the out-of-the-box information log4j events contain. Our MDCFilter adds more information to them. To display this additional information, the corresponding columns must be enabled. Open the File -> Preferences -> MDC Names menu and add the following MDC column names: Uri, ExecTime, UserName, Action, Arguments.

logfaces_DEMO_webtop_MDC_names

Now, if you enable the display of those columns by clicking the “Columns” button in the logs frame and selecting them in the Diagnostic context, you should see more contextualized logs:

logfaces_DEMO_webtop_MDC_columns

Every time an action is performed, you should now see a lot more information; actually, enough to tell what action the user is performing.

full_mdc

If you have a quick look at the MDCFilter configuration, you will see that the only thing it does is generate “INFO”-level logs with information contained in HTTP session attributes, HTTP request attributes and parameters. That is all, nothing more. There is no logfaces-specific code here; we could have logged this information to a local log file, but forwarding it to the logfaces interface makes it much more usable.
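As a sketch of the MDC pattern the filter relies on (this is not the MDCFilter source; the attribute names are simply the ones we configured as columns): log4j’s MDC is essentially a per-thread map of key/value pairs that appenders stamp onto every log event, filled from the request before the filter chain runs and cleared afterwards.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the MDC pattern using a plain ThreadLocal
// map (org.apache.log4j.MDC does essentially this internally).
public class MdcSketch {
    // Per-thread diagnostic context
    static final ThreadLocal<Map<String, String>> MDC =
            ThreadLocal.withInitial(HashMap::new);

    // Simplified stand-in for the filter's doFilter(): 'request' plays
    // the role of the HTTP session/request attributes and parameters.
    static Map<String, String> handleRequest(Map<String, String> request) {
        try {
            MDC.get().put("Uri", request.get("uri"));
            MDC.get().put("UserName", request.get("user"));
            // ...a real appender would now stamp these onto each event
            return new HashMap<>(MDC.get());
        } finally {
            MDC.get().clear(); // avoid leaking context to the next request
        }
    }
}
```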

If you use either the DfLogger or a basic log4j logger in your own code, you can also forward those logs to logfaces, to ease your development for example, using the same appender or another one.

log4j.logger.your.own.package=DEBUG, LFS

We are focusing here on webtop transaction logging, but you have certainly understood by now that you may add configurations to forward all webtop error-level logs to logfaces, which can be of great help.

JMS instrumentation using the custom servlet filter:

OK, we are halfway to succeeding in our challenge :-). Let’s instrument the Java Method Server so that it forwards information about what is happening on it. You will notice these steps are very similar to the ones we used for instrumenting webtop.


These steps are valid for Documentum 6.x environments, but the actual instrumentation should be the same on both older and newer versions. Nevertheless, it is difficult to test them on all Documentum versions, and I unfortunately do not have the time to build a decent installer for them.


Step 0: Connect to your Content Server.

Step 1: Stop your JMS.

Step 2: Download the lfsappenders-x.x.x.jar from the logfaces download page and:

    • copy it to your <documentum>\jbossX.X.X\server\DctmServer_MethodServer\lib folder.
    • copy it to your <documentum>\jbossX.X.X\server\DctmServer_MethodServer\deploy\ServerApps.ear\lib folder

Note that this also makes the jar available for further use by other components like ACS/BOCS.

Step 3: Replace the log4j jar contained in the <documentum>\jbossX.X.X\server\DctmServer_MethodServer\deploy\ServerApps.ear\lib folder with the 1.2.17 one from the <unzipped> folder.

Step 4: Edit the <documentum>\jbossX.X.X\server\DctmServer_MethodServer\deploy\ServerApps.ear\APP-INF\classes\log4j.properties file and add the following section to it. Same as for webtop, set the log4j.appender.LFS.remoteHost property to the name of the machine running the logfaces server.

# Custom appenders
log4j.logger.com.wordpress=INFO, LFS

#LogFaces appender
log4j.appender.LFS=com.moonlit.logfaces.appenders.AsyncSocketAppender
log4j.appender.LFS.application = DEMO-JMS
log4j.appender.LFS.remoteHost = <xxxx logfaces server url xxxx>
log4j.appender.LFS.port = 55200
log4j.appender.LFS.locationInfo = true
log4j.appender.LFS.threshold = ALL
log4j.appender.LFS.reconnectionDelay = 5000
log4j.appender.LFS.offerTimeout = 0
log4j.appender.LFS.queueSize = 1000
log4j.appender.LFS.backupFile = D\:/Temp/lfs-backup.log

Step 5: Copy the <unzipped>\mdcservletfilter.jar and <unzipped>\castor-1.1-xml.jar files to the <documentum>\jbossX.X.X\server\DctmServer_MethodServer\deploy\ServerApps.ear\lib folder.

Step 6: The servlet filter uses a configuration file. Copy the <unzipped>\jms\MDCFilter-config.xml file to the <documentum>\jbossX.X.X\server\DctmServer_MethodServer\deploy\ServerApps.ear\DmMethods.war\WEB-INF folder.

Step 7: Edit the web.xml file in the <documentum>\jbossX.X.X\server\DctmServer_MethodServer\deploy\ServerApps.ear\DmMethods.war\WEB-INF folder and declare the MDCServletFilter filter and the corresponding filter-mapping by adding the following section just before the DoMethod servlet definition. !!! Be careful to specify an absolute path to the MDCFilter-config.xml file !!!

Example path: D:/Documentum/jboss5.1.0/server/DctmServer_MethodServer/deploy/ServerApps.ear/DmMethods.war/WEB-INF/MDCFilter-config.xml

<filter>
  <filter-name>MDCFilter</filter-name>
  <description>Add MDC contexts dynamically</description>
  <filter-class>com.wordpress.stephanemarcon.filters.mdc.MDCFilter</filter-class>
  <init-param>
    <param-name>config-file</param-name>
    <param-value><em>absolute_path_to_MDCFilter-config.xml</em></param-value>
  </init-param>
</filter>

<filter-mapping>
  <filter-name>MDCFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Step 8: You can now start your JMS server.

If you followed all the above steps, you should now see one line for each call to the Java Method Server, i.e. one line per Java Method execution.

Documentum Java Method Server Monitoring with logfaces

NOW SOME REAL-TIME FULL-COLOR CONTEXTUALIZED WEBTOP TRANSACTIONS MONITORING

Yes, I know… I’m not good at finding teasing headings…

We are starting to have something interesting at this point. The servlet filter we added to Webtop sends its INFO-level messages to our logfaces real-time view, giving interesting information about all relevant requests handled by our application. On the other side, you should now also see logs forwarded by the servlet filter we deployed to our JMS server, which extracts contextual information from the requests sent to the DoMethod servlet (which, as you know, is the actual engine of the JMS server).

Let’s see how we can tune the logfaces viewer for real-time transaction monitoring. Logfaces has a very interesting feature called “tags”. It permits the visual tagging of log events in the interface based on user-defined criteria. If you followed all the steps of the webtop instrumentation, you can, for example, define a tag for the checkout operation under File -> Preferences -> Tags.

Documentum Webtop Logfaces checkout tag

And you could end up with this sort of colorful usage of logfaces “tags” (if you are interested in getting the exact tag definitions I made for this tutorial, please post a request and I will add them somewhere).

CONCLUSION of part-2

As I was explaining in my first post, there are several cases where this log centralization solution may be used:

  • as a tool for pure monitoring of a Documentum platform; the aim of this Part 2 was to give a real-life introduction to this use case. Depending on the feedback/comments I get about Part 2, I will write a third dedicated post giving further details on how to instrument the other most commonly used Documentum components (ACS, Content Server processes, docbrokers, CTS and the indexing server). Logfaces’ syslog compatibility, email alerting and built-in reporting may also be used to support this use case.
  • as a tool to ease the day-to-day work of developers. There are many more features of the logfaces solution itself to present in support of this use case (logs-to-file export, source code linking, …). Depending on the feedback/comments I get on this post, I may write dedicated posts describing them.
  • as a tool to share logs with other departments or teams, in case your Documentum platform provides services to other systems. Logfaces has a nice feature which permits defining security over log events. Imagine you provide services to other systems (using DFS, for example); you may decide to give access to part of the corresponding logs to the calling systems’ support teams. In many cases, such direct access to the logs can save you a lot of time, helping each side of the interface quickly identify where a potential problem sits.

A full reading of the logfaces manual would not have fit into the 30-minute challenge, but as we are now reaching the end of the post, you may refer to it for a deeper understanding of logfaces. I hope my post helped you understand the tool, but there is nothing better than a good old manual.

When it comes to such wide-ranging solutions, it is quite difficult to write helpful, detailed material about them. I hope this post reached its aim and gave you a way to investigate whether logfaces can be a candidate for your environment. I have always been a big fan of tools with great usability/cost and quality/cost ratios. Logfaces is definitely one of those and fully justifies its (actually very low) price, even if you only consider its benefit to development productivity. We all use interesting tools/technologies we spent time experimenting with, selecting and integrating into our architecture, ideally at the lowest cost. My aim is to post other “reviews” of tools I strongly recommend, hoping this can be of help to others.


7 responses to “Documentum Monitoring – platform logs’ centralization using Logfaces – Part 2”

  1. Does it have a trial version available?



  3. Hi Stephane,

    In your choice of log server, did you consider using Graylog2? I’m currently looking for a log server for one of my customers and I’m considering trying Graylog2. Did you make a comparison between Graylog2 and Logfaces?


    • Hi Pierre,

      Be careful, quite a long answer, though I’m not sure it will be helpful :-).

      As you know (I must give some more details for other readers), there are indeed a lot of log centralization solutions available on the market, open-source or not: from more system-related log servers (Fluentd/Scribe, Flume, Kafka, Syslog-ng, …) to more recent ones like Splunk, Logstash and Graylog2, and “logging as a service” offerings like Loggly or Papertrail. A lot of solutions with different back-end choices (ElasticSearch/Hadoop/MongoDB/…), maybe actually too many. The logging field is quite trendy and, on one hand, benefits from “big data” solutions, but on the other hand still suffers from their lack of harmonization…

      Actually, I think Splunk, Graylog2, Logstash, ElasticSearch and Kibana are the stacks which currently seem to stand out. There is certainly no bad solution among those, and I will certainly not say Logfaces is better; I just know it works well in my case. I did not do a detailed comparison between Graylog2 and Logfaces, so I cannot give you figures for such a comparison. What I can do is explain how two major criteria led my own study.

      Logs forwarding performance:
      —————————-
      – most components of the Documentum platform use log4j, so I had to find a log4j-compliant appender. The logging solution had to provide one ready-made, as I wanted neither to code nor to test one myself

      – the logging mechanism should have nearly no impact on the monitored systems. For me this meant it had to be lightning fast; in practice, this meant asynchronous. I wanted it to be compatible with full DEBUG-level logging of a small to medium amount of custom code in production, which meant the appender had to be able to handle something like 3,000 events per second, according to my calculations. Performance was actually the big thing, whatever the protocol used, as most existing solutions support the same protocols. I just did a quick refresh to get some figures, and I currently persist 1,700,000 events per day. In general (I could be wrong), whatever the solution, I am not worried at all about message ingestion or message persistence, but much more about the forwarders and their potential impact on applications. In short, my logging may die but my application shouldn’t :-).

      When I looked at the log4j appenders the different solutions were providing, the number of truly asynchronous appenders was really limited. I found diverse opinions about the default log4j AsyncAppender (which, by the way, is only available when a log4j XML configuration is used); in particular, I found information about this appender falling back to a synchronous mechanism when a RuntimeException is thrown by the proxied appender, which gave me some doubts. I also wasn’t happy with changing from a plain log4j properties file to an XML one, mainly because I did not know whether this would disturb the DfLogger logger.

      Actually, log4j 2 is the first version where a real asynchronous logger is available, and it seems to have dramatically increased performance. But that only helps when log4j 2 is an option.

      My (I think legitimate) assumption was that it should be much faster to plug an asynchronous appender into the log4j logger than to have logs written into a file and then picked up by an agent, but I did not make any performance comparison for this. It was while looking for such a native asynchronous log4j appender that I found the logfaces product. The logfaces appender does not rely on the AsyncAppender class and builds on a custom queuing/buffering/fallback mechanism. I tested the appender and was happy with its performance and stability.

      Now, coming back to Graylog2, I don’t know how you plan to do the event forwarding. I know Graylog2 uses its GELF format, and org.graylog2.log.GelfAppender can be used as a log4j (1.x) appender. I’m just wondering whether this appender is asynchronous or not. Since you have the choice to go for UDP, I presume the author did not build any custom buffering mechanism, but I’m not sure at all.

      Logs viewer interface:
      ———————-
      I actually started my study with Splunk, and I must say I was a little disappointed with its web interface and the learning curve its search commands require. Maybe I’m a little old-school, but I’m not a fan of web interfaces for deep-dive data mining. After testing Splunk, discovering the logfaces UI was a real relief. The interface is very fast and intuitive; you need less than 15 minutes to understand how it works. It actually looked to me more like a monitoring cockpit than a search interface, which is exactly what I wanted. Maybe I do not know the other interfaces (Kibana/Graylog2) well enough, but my point is that I did not need anything more or less.

      As a heavy client (not a web client), it also benefits from direct access to the machine it runs on. For users this means a one-click export to any file editor and (if enabled) one-click access to the exact line of source code which generated the log event. In short, as I do not want to sound like I’m selling something, I’d really recommend trying it.

      In short, all solutions (not speaking here about forwarders) have nice back-end engines; the really distinctive criterion is the end-user interface. Depending on your target audience, Kibana could be a nice choice. I’m just wondering what you had in mind as a solution:

      – (1) having logs ingested by logstash, persisted to ElasticSearch and then accessed using GrayLog2-web
      – (2) or having logs ingested by logstash, forwarded as gelf to Graylog2, persisted to ElasticSearch and accessed using Kibana
      – (3) or having logs forwarded to graylog2 using a log4j appender, persisted to ElasticSearch and then accessed using Graylog2-web or Kibana
      – (4) or something else?

      Personally, I would recommend having your customer test the interfaces. I’m not an expert in those tools, but I think that if you go for solution 3, you will be able to test both Graylog2-Web and Kibana. For the log server itself, I must say I was quite impressed with logstash and, as I explained in Part 1, if I had much more time to set up a solution, I would go for the ELK stack (ElasticSearch/Logstash/Kibana). Now, I personally consider that, as with Splunk, if you as an ECM consultant have to set up a solution like this, being responsible for it is quite “dangerous”, as it should actually be considered a company-wide service, with dedicated people configuring and running it. If such a tool is not available at company level, I would be careful with the cost of ownership and the actual ownership identification of such a platform, and would pragmatically go for the least intrusive and most intuitive solution. Which is what I personally did. But OK, first, ideally have your final users test the interface; this is the best recommendation I can offer.



