Starting MongoDB on CentOS with NUMA disabled

August 1st, 2012

This may not be the best way to do this, but it works for me. I got fed up with seeing the following message in the logs every time MongoDB was restarted.

Wed Aug  1 12:06:39 [initandlisten] ** WARNING: You are running on a NUMA machine.
Wed Aug  1 12:06:39 [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
Wed Aug  1 12:06:39 [initandlisten] **              numactl --interleave=all mongod [other options]

I’m using a pretty boilerplate init.d script for mongo so I figured it would be simple to update the start command to use numactl. What I discovered is that my init script uses a builtin bash function called daemon to start the mongo process. daemon allows for a --user option. Unfortunately, numactl does not. Neither is it possible to execute a bash function using numactl. “TO THE GOOGLES!”

Hmm, everything I see recommends wrapping the numactl command around another command called start-stop-daemon. OK, but CentOS doesn’t have a start-stop-daemon command. Argh.

Finally I resorted to digging into the daemon function to see what it was doing and came up with this:

numactl --interleave=all runuser -s /bin/bash $MONGO_USER -c "$mongod $MONGO_OPTS"
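For context, here is roughly how that line can slot into the init script’s start section. This is a sketch only: the mongod path, user name and options are illustrative values, and the numactl guard is my own addition so the same script still works on boxes where numactl isn’t installed.

```shell
#!/bin/sh
# Sketch of the start logic; mongod path, user, and options are
# illustrative values, not the real ones from my init script.
mongod="/usr/bin/mongod"
MONGO_USER="mongod"
MONGO_OPTS="-f /etc/mongod.conf"

# Only prepend numactl when it is actually installed, so the same
# script still starts mongod on non-NUMA machines.
if command -v numactl >/dev/null 2>&1; then
    NUMACTL="numactl --interleave=all"
else
    NUMACTL=""
fi

# Build the start command; a real init script would exec this
# instead of echoing it.
cmd="$NUMACTL runuser -s /bin/bash $MONGO_USER -c \"$mongod $MONGO_OPTS\""
echo "$cmd"
```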

It works. Moving on now.

Exploring Astronomy Dataset Links with GridWorks

May 27th, 2010

At ADS we are looking at new ways to index and provide full text searching for the Astronomy and Physics literature we manage to obtain, either through scanning + OCR of historical content, or from digital material provided by some publishers. Two options we’re looking at are Apache Solr and CDS-Invenio. But that’s not what this post is about.

While parsing and indexing a pile of about 42k articles from the past dozen or so years of the ApJ, AJ, ApJL and ApJS, formatted in the NLM XML schema, I noticed that many of the articles contained external links to various things, most interestingly, astronomical datasets.* My first thought was, “hmm, I wonder what’s at the other end of all those links…,” followed closely by, “hey, crawling those links would make a nice dataset to load into that nifty new Freebase Gridworks tool I heard about the other day.” So that’s what I did.

Out of 13652 articles there were 33600 total links which fell into three categories: http urls (28555), dataset links (938) and supplement links (4107). Dataset links consist of an identifier that looks something like ADS/Sa.CXO#obs/927. To get the goods you have to feed that id to a resolver which, assuming a valid identifier, will redirect you to the real location of the dataset. Supplement links took a bit more head-scratching as their values consisted of just a relative file name, like datafile3.txt or 69491.figures.html. We figured out that the solution was to append the filename to the publisher’s URL for the article, e.g., article and dataset or article and figures.
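For each link type, then, the crawl script has to turn the raw value into something fetchable. A minimal sketch of that logic, in which resolver.example.org and the publisher article URL are stand-ins rather than the real endpoints:

```shell
#!/bin/sh
# Turn each of the three link types into a fetchable URL.
# resolver.example.org and publisher.example.org are placeholders.
resolve_link() {
    type="$1"; value="$2"; article_url="$3"
    case "$type" in
        # http links are already complete
        url)        echo "$value" ;;
        # dataset ids get handed to the resolver, which redirects
        dataset)    echo "http://resolver.example.org/resolve?id=$value" ;;
        # supplement filenames are relative to the article's URL
        supplement) echo "${article_url%/}/$value" ;;
    esac
}

resolve_link dataset "ADS/Sa.CXO#obs/927" ""
resolve_link supplement "datafile3.txt" "http://publisher.example.org/article/10.1086/323006/"
```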

The ultimate objective was to load the results of crawling these links into Gridworks, but that means getting the data into csv or tsv form. Rather than have the crawl script output straight to csv, I stash the results in a MongoDB instance. Here’s an example of one of the resulting json documents in Mongo:

{u'_id': ObjectId('4bfc3737a1f714263b000012'),
 u'anchor_text': u'',
 u'bibcode': u'2001ApJ...556L..83F',
 u'content': u'<HTML>\n<HEAD>\n<TITLE>Duncan A. Forbes, Swinburne University, Globular Clusters</TITLE>\n</HEAD>\n\n<h1> Globular Cluster Research</h1>\n\nI am interested in various aspects of Extragalactic Globular\n    Cluster research. In particular the formation and evolution\n    of Globular Cluster Systems and their host galaxies. \n<br>\n\n<UL>\n<A HREF="colours.html">GLOBULAR CLUSTER PHOTOMETRY DATABASE</A>\n</UL>\n\n<UL>\n<A HREF="spectra.html">GLOBULAR CLUSTER SPECTRAL DATABASE</A>\n</UL>\n\n\n<UL>\n<A HREF="review.html">GLOBULAR CLUSTER REVIEW PAPERS</A>\n</UL>\n\n\n<UL>\n<A HREF=""> SAGES PROJECT</A>\n</UL>\n\n<UL>\n<A\n\t  HREF=""> HARRIS DATABASE</A>\n</UL>\n\n\n\n<tr><td><hr noshade></td></tr>\n\n </BODY>\n',
 u'context': u'<p>The combined sample data are available at <ext-link ext-link-type="uri" xlink:href=""></ext-link>. </p>\n',
 u'doi': u'10.1086/323006',
 u'ft_source': u'/proj/ads/articles/sources/AAS/ApJL/2001/556/2/323006/323006.xml',
 u'link_id': u'',
 u'link_type': u'UrlLink',
 u'response': {u'accept-ranges': u'bytes',
               u'content-length': u'781',
               u'content-location': u'',
               u'content-type': u'text/html; charset=UTF-8',
               u'date': u'Tue, 25 May 2010 10:14:07 GMT',
               u'server': u'Apache/2.2.15 (Unix) DAV/2 mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5',
               u'status': u'200'},
 u'solr_id': u'31908',
 u'url': u'',
 u'xpath': u'/html/article/body/sec[5]/fn-group/fn/p/ext-link'}

From there it was easy to dump what I needed to csv and load into Gridworks. I’m not going to get into how totally awesome the Gridworks software is, except to say you should watch the demo videos.
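For the dump step, one option is mongoexport’s csv mode. The database, collection and field names below are guesses on my part, not the actual ones; the snippet just builds and echoes the command it would run.

```shell
#!/bin/sh
# Dry-run sketch of the Mongo -> csv step using mongoexport.
# DB, collection, and field names here are guesses, not the real ones.
DB="adslinks"
COLL="links"
FIELDS="bibcode,link_type,url,response.status,response.content-type"
cmd="mongoexport -d $DB -c $COLL --csv -f $FIELDS -o links.csv"
echo "$cmd"
```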

I can’t post the entire Gridworks project, but here’s some screencaps, a column list and some of the more interesting facets.

Initial data load plus some derived columns

Column list:

  • Id of the MongoDB doc
  • Id of the solr doc
  • ADS bibcode identifier of the article
  • Publication year – derived from the bibcode
  • DOI
  • xpath expression of the <ext-link> element
  • parent tag – the containing element type
  • link context – the containing element’s serialized xml contents
  • link type – one of url, dataset or supplement
  • anchor text – the text contents of the <ext-link>
  • full text source file
  • journal
  • full text source – publisher
  • extlink id – either the url or the dataset id or the supplement filename
  • domain – derived from the url
  • status – http status returned when requesting the resource
  • content-type – content-type header returned in the response
  • mimetype – derived from the content-type response header
  • location – the final url of the resource following any redirects
  • content length
  • response headers – list of all the header attribute names returned in the response (just to see what other interesting stuff might be there)

Still to be determined how many of the url links point to some kind of data

Knowing the container could help parsing out something about the semantics of the link

~70% 200's was more than I expected. Of course 200 doesn't mean it actually found something interesting.

would have hoped for fewer text/html

All the hits look like observation reports, like this one, which I think is a good thing

Finally a thanks to Sean Hannan who worked out a hack to a bit of the Gridworks javascript that automatically turns any cell values beginning with “http://” or “https://” into active links. The nice thing about that was it let me turn the column containing the MongoDB id into a link to a little script that dumps a JSON representation of the document.

* NLM allows for links to external resources using either <ext-link> or <supplementary-material> elements.

Embedding citation metadata in the ADS HTML

March 1st, 2010

Here’s what I know: you can embed a set of <meta/> tags containing citation metadata in your HTML to help Google Scholar to index your content. We’ve been doing it at ADS for quite a while. I’m not certain if the impetus came directly from Google, or, more likely, we got the idea from a CrossTech blog post by Tony Hammond that describes the technique.

For example, if you fetch one of our abstract pages with curl -s and grep for meta you should see:

<meta name="citation_language" content="en" />
<meta name="citation_doi" content="10.1016/0550-3213(77)90384-4" />
<meta name="citation_abstract_html_url" content="" />
<meta name="citation_title" content="Asymptotic freedom in parton language" />
<meta name="citation_authors" content="Altarelli, G.; Parisi, G." />
<meta name="citation_issn" content="0550-3213" />
<meta name="citation_date" content="08/1977" />
<meta name="citation_journal_title" content="Nuclear Physics B" />
<meta name="citation_volume" content="126" />
<meta name="citation_firstpage" content="298" />
<meta name="citation_lastpage" content="318" />

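Pulling those values back out of a page is straightforward too. A quick sed sketch, with a here-doc standing in for the curl fetch of an abstract page:

```shell
#!/bin/sh
# Extract name/content pairs from citation_* meta tags on stdin.
extract_citation_meta() {
    sed -n 's/.*name="\(citation_[a-z_]*\)" content="\([^"]*\)".*/\1: \2/p'
}

# The here-doc stands in for: curl -s <abstract page url>
out=$(extract_citation_meta <<'EOF'
<meta name="citation_doi" content="10.1016/0550-3213(77)90384-4" />
<meta name="citation_volume" content="126" />
EOF
)
echo "$out"
```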
Since the first implementation we’ve had some back-and-forth with Abhishek Jain at Google Scholar to ensure we’re making use of the full set of fields that Google Scholar looks for.*

Dan Chudnov, David Bucknum & Ed Summers at the LoC recently expressed interest in also embedding these tags. In the absence of an official reference from the Google Scholar folks, I figured it would be a good thing to post the full list here.

  • citation_language
  • citation_doi
  • citation_abstract_html_url
  • citation_title
  • citation_authors
  • citation_issn
  • citation_date
  • citation_journal_title
  • citation_volume
  • citation_firstpage
  • citation_lastpage
  • citation_publisher
  • citation_issue
  • citation_pdf_url
  • citation_pmid
  • citation_keywords (multiple instances OK)
  • citation_conference
  • citation_dissertation_name
  • citation_dissertation_institution
  • citation_patent_number
  • citation_patent_country
  • citation_technical_report_number
  • citation_technical_report_institution

I had to cull this list via a visual scan of a long, forwarded e-mail thread. So, like I tried to insinuate above, it sure would be great if Google Scholar would publish an official reference to this schema somewhere.

* all instances of the term “we” should really be read as “my boss, Alberto”.

Contextual Inquiry on the Cheap

December 31st, 2009

I thought I’d share the interview outline I’ve been using to conduct some low effort contextual inquiry sessions with ADS users.

thumbnail links to google doc

Classic contextual inquiry, in which the researcher sits with or shadows a person in the context of the subject’s own working environment, is often conducted in 3+ hour sessions, frequently with all manner of video capturing equipment. My goal is to cut that time down to 30 minutes, partly because this whole user research thing is supposed to be a part-time endeavor, and also because the majority of ADS users are PhDs, and we all know just how valuable their time is.

So far I’ve only managed to conduct four of these interviews (with two more scheduled). Would love to get a total of 10. Since I don’t have access to video equipment I simply mash out typewritten, poorly spelled notes as fast as I can. The notes have a stream of consciousness flavor, but the early indications are that the information gathered will be valuable.

Example notes:

refers to bibcode as "indexing thing". "not any use to me."
wrote a perl script that rewrites the bibcode into something understandabl
other strategies for searching for particular star: entering star name into abstract search or title search.
finds one article using abstract search.
mentions that he doesn't know boolean sytnax by memory
to find more tries going to simbad and finds alternate names for the star

Mad Anachronisms

August 19th, 2009

Like the rest of the planet it seems, I’ve been consumed lately by the show Mad Men. Jennifer and I are still catching up via Netflix. It really is one of those truly great and remarkable shows that comes along too rarely.

[Warning: Insignificant spoiler in the next couple of sentences]

About midway through the second season we came to the somewhat infamous picnic scene.  The Drapers have taken the new Cadillac for a spin in the countryside. They relax and recline on a blanket. The grass is green, the breeze is mild, they talk about how rich they are. All is good. Then it’s time to pack it up and head home. We see Don stand up, stretch, smile, and chuck his empty beer can into the idyllic landscape as if he was tossing a baseball to his son. Betty pinches two corners of the blanket and gives it a lift & shake, distributing the paper plates, napkins and other picnic detritus across the grass. The trash begins to lightly flutter and drift down the slope. In 1963 the happy family piles into the car and motors away. Meanwhile, in 2009, we sit on the couch, jaws agape at this stunning spectacle of thoughtless littering.

It’s surprisingly shocking. There’s the shock of seeing it, and then there’s the shock at being so shocked. Every bone in your body wants to be repulsed, but the relativist mindset makes it difficult to fault the characters. As the writer points out in the DVD commentary, Iron Eyes Cody didn’t come along until 1971.

The scene also briefly cracks open the narrative fourth wall: it’s clear the writer/director is blatantly highlighting these banal actions to serve up a very in-your-face cultural anachronism. It’s only been 40 years, but wow have the dominant cultural attitudes about the environment changed.

We’ve since watched a few more episodes, but that scene still sticks with me, and lately it’s got me imagining someone sitting on their couch in 2049–or hovering in their Anti-Grav Lounger, or whatever–and passing judgment on our present day actions. It’s interesting to think what might be the contemporary equivalents of folks in the early 60s treating the planet like a giant trash receptacle.

I’m guessing they’ll look back in horror at us actually throwing things–anything!–away in a garbage can rather than somehow recycling or composting.

You mean the water from the shower just drains away into the sewer?!?

They have apples in a Boston supermarket that were grown in New Zealand? Insanity!

I mean, can you imagine!?

IRC Blocked? Create an SSH tunnel with PuTTY

June 12th, 2009

Can’t get to IRC because port 6667 is blocked on your local network? Here are some instructions for how to create an SSH tunnel using PuTTY, and then connect to freenode (or any other IRC server) with Pidgin, using the tunnel as a SOCKS5 proxy. You can most likely s/Pidgin/your IRC client of choice/, but the screenshots below show the Pidgin config dialogs. These instructions assume that SSH, port 22, is not also blocked. Woe be to you if that is the case.

My original source for how to do this was this post, which describes the same trick but for FireFox.

Step 1: create a new PuTTY session configuration. In this case I’m using my usual login and calling the session irc-7777. I usually name the session based on the local port number I’m going to forward.

create and save a new PuTTY session


Step 2: go to the Connection -> SSH -> Tunnels node of the session config. In the Source port field enter "7777" (or some other port number). In the radio button section below that select Dynamic and Auto. Click the Add button. You should see "D7777" appear in the list of forwarded ports.

Configure the ssh tunnel


Tunnel D7777 appears in the list


Step 3: go back to the main session config node and save the session again. Then open the PuTTY session by clicking the Open button. A normal looking PuTTY terminal window should open. This session is your tunnel so you should probably leave it be, i.e., don’t use it both for doing stuff in the shell and as a tunnel (although I don’t really know what consequences that would lead to). If the fact that this tunnel takes up space in your TaskBar bothers you (it does me), check out PuTTY-Tray.

You've probably never seen one of these before


Step 4: Configure your Pidgin IRC account to use the tunnel as a SOCKS5 proxy. Go to Accounts -> Manage Accounts. Highlight your IRC protocol account and click Modify (or create one by clicking Add). Go to the Advanced tab of the config dialog. In the Proxy Options section select SOCKS5 as the proxy type, enter "localhost" as the Host and "7777" (or whatever port you used) as the Port.

Specify your local tunnel as the SOCKS5 proxy


Save and that’s it. You should now be able to connect to the IRC server through Pidgin.

On a *nix machine (or using Cygwin, I suppose) this is, of course, much simpler. You can replace the PuTTY steps with a single ssh command: ssh -D localhost:7777

gtfo: get the foaf out

April 23rd, 2009

Leading up to the Linked Data pre-conf at code4lib09 there were several irc discussions around just how to structure the day and what we could do to give attendees the best shot at having an “ah-ha moment”. One idea was to create a simple application that would demonstrate the potential of linked data while also being participatory. I think it took Ed Summers less than 24 hours to hack together the first iteration of the code4lib2009 attendees foaf crawler/gallery thingy. He can usually be counted on for such feats of overnight engineering.

Read more »

Basic Block Data Decomposition in Perl

March 3rd, 2009

I was playing around with the idea of parallelizing something the other day to eke out some performance. Unfortunately, I’ve gotten a bit rusty since writing some MPI code for a parallel computing course a few years back. I got stuck on what should be the simple part of dividing up my input across the threads.

Read more »

Code4LibCon 2009: Timeline and IRC log

February 28th, 2009

To cut to the chase, I extracted the hCal events from the 2009 conference schedule and fed them into a Simile Timeline. I then linked each event to the corresponding slice of my IRC client log. If you want to take a look it’s here.

Read more »

My “unified” twitter + client

January 13th, 2009

I’ve been on the lookout for a client that would somehow unify my streams from both Twitter & There are a few clients that will support one or the other (sometimes via hacking the source) but I’ve found none that scratch my itch of having both services presented in the same window. My taskbar is precious real-estate.

These instructions reflect my setup on an Ubuntu vhost which I ssh into via PuTTY.


  • Twitter account
  • account
  • TTYtter, a perl command-line client for accessing Twitter-compatible APIs
  • Gnu Screen

Step 1, download TTYtter, make it executable and put it somewhere on your $PATH.

Step 2, create a file at ~/.ttytterrc1 with the following contents, including your login:


Step 3, create another file at ~/.ttytterrc2 with just your twitter login:


Step 4, create a dedicated screen session with the command screen -S ttytter

Step 5, split the screen horizontally using the screen command Ctrl+a S

Step 6, start your session in the top window with the command ttytter -rc=1

Step 7, switch to the bottom pane with the screen command Ctrl+a TAB

Step 8, create a new window in the bottom pane with the screen command Ctrl+a c

Step 9, start the Twitter session in the bottom pane with the command ttytter -rc=2

Example screenshot


Only set this up 2 hours ago but already I can tell this is going to work for me long term. Beats having to do hard resets due to some combination of Twhirl and/or my video drivers. One improvement I’d like to investigate is some kind of color highlighting of the <username>’s to improve readability.

Update: January 13, 2009 at 11:30

So I quickly realized that the above setup has a major malfunction, namely that it doesn’t preserve the split window, so it’s not possible to detach from and reattach to the ttytter screen session. There’s an extra trick necessary to fix this:

Step 3.5, create an “outer” screen session that wraps the ttytter session with the command screen -e^Ee -S outer.

The -e^Ee option binds the escape key for that session to Ctrl+e instead of the default Ctrl+a. This is a common trick for embedding screen sessions within sessions. With this extra step I can now detach from the outer session using Ctrl+e d and reattach, with the inner split-screen session preserved, using screen -r outer.

Also, I learned that to colorize the output you can use the -ansi option to ttytter.