Friday, January 27, 2017

Rock Band Ion Drums Replacement Nut Size

If you have an Ion Drum Rocker that has seen serious use, the nuts inside the plastic wingnuts that tighten the clamps for the pads and cymbals are probably worn. On my 2008 PS3 Ion Drum Rocker, some of the nuts were worn to the point that they wouldn't tighten all the way. The correct replacement size is:

6mm x 1.00, or M6 x 1.00 (6 mm diameter, 1.00 mm thread pitch).

If you live near a Lowe's, this is the item: https://www.lowes.com/pd/The-Hillman-Group-5-Count-6mm-Zinc-Plated-Metric-Hex-Nuts/3012716.

Now go forth and rock.

Sunday, November 1, 2015

All My Yak Shaves #1: One SSSD Config for RHEL 5-7

TL;DR

Even though the Kerberos that ships with CentOS/RHEL 5 (1.6.x) supports the KEYRING credential cache, SSSD needs a function that only exists in Kerberos 1.10.x to use it.

Why SSSD?

I had been meaning to refactor our Puppet authconfig management for a while. (Authconfig is the recommended utility for configuring how user information lookups and authentications are performed on Red Hat-based Linux distributions.) Until recently we had been using a mix of nss_ldap, nss-pam-ldapd (nslcd), and sssd packages on our Oracle Enterprise Linux (a Red Hat Enterprise Linux clone) 5-7 hosts. We're lucky in that we generally only have to support RHEL-based Linux, but simultaneously supporting three versions can be tricky. My goal was to standardize our authconfig settings as much as possible while still supporting:
  1. nested group membership
  2. Kerberos authentication and password changes
for our POSIX users and groups in Active Directory. Red Hat's documentation recommends SSSD for this task, and since SSSD exists in RHEL 5, 6, and 7, I hoped both goals could be achieved.

Version Issues Discovered

RHEL 5 ships with SSSD 1.5.1. This version technically supports nested groups, but querying recursive group membership for a user forces SSSD to also fetch all members of those groups, which was frustrating sudo users, who would have to wait about 5 seconds for the system to check whether they were in a sudo group. A coworker discovered this issue was addressed in the 1.9.x versions of SSSD with the addition of the "ignore_group_members" configuration parameter.
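For illustration, here's roughly where that setting lives; the domain name below is a placeholder and the option needs SSSD 1.9 or newer, so check sssd.conf(5) on your build:

    # /etc/sssd/sssd.conf (fragment) -- "example.com" is a placeholder
    [domain/example.com]
    # Skip resolving every member of the user's groups during lookups;
    # requires SSSD >= 1.9
    ignore_group_members = True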

Another issue was support for Kerberos KEYRING credential caches in SSSD. I discovered the hard way that pam_krb5 will block logins if your disk is full or it otherwise cannot create the default Kerberos credential cache (/tmp/krb5cc_%{uidNumber}). I wanted to avoid that with SSSD, and using the Linux kernel's keyring functionality seemed like a good way to sidestep the full-disk issue. Unfortunately, while the version of Kerberos that ships with RHEL 5 (1.6.1) supports KEYRING credential caches (even if the default credential cache is not configurable in /etc/krb5.conf), SSSD support for KEYRING was not added until SSSD 1.10.x. That last bit was the hard part: so much of SSSD's Kerberos implementation seems to simply rely on the system Kerberos libraries, but it turns out that SSSD's KEYRING support requires krb5_cc_get_full_name(), which was first introduced in Kerberos 1.10 and is only available in RHEL 6 and newer.
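For reference, on RHEL 6/7 hosts where SSSD 1.10+ and Kerberos 1.10+ are available, pointing the credential caches at the kernel keyring looks roughly like the fragment below. The domain name is a placeholder, and the exact keyring type and template token vary by version, so verify against sssd-krb5(5) before copying anything:

    # /etc/sssd/sssd.conf (fragment) -- RHEL 6/7 only, "example.com" is a placeholder
    [domain/example.com]
    # Store Kerberos credential caches in the kernel keyring instead of /tmp,
    # so a full disk can't block ccache creation; needs SSSD >= 1.10 built
    # against Kerberos >= 1.10
    krb5_ccname_template = KEYRING:persistent:%U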

Version Issues Resolved-ish

I discovered a RHEL 5 backport of SSSD 1.9.x that ended up working out well for us. It gives us "ignore_group_members", which makes sudo fast. It doesn't give us KEYRING on RHEL 5, but given the prospect of trying to compile and statically link newer Kerberos libraries into a newer, hand-compiled version of SSSD, I chose to accept that older servers might not let remote users authenticate if the disk fills up.

Tuesday, August 5, 2014

Why Did I Just Buy A 16-Year Old Car?

Nostalgia.


Like many adults, I yearn for a car from my childhood. That car is my father's GC Mazda 626 GT sedan. The thing I loved most about the car was its sleeper status; its subdued styling and straight lines easily masked the 2.0L turbocharged beast underneath. It is not fast by today's standards, cranking out only 120 horsepower and 150 pound-feet of torque. But especially on days we were late for church, my father made it fast.

Until a catastrophic timing belt failure hit my daily driver, I was generally uneducated and uninterested in cars. I bought a Honda Civic on reputation and fuel efficiency alone and always paid for oil changes and whatever service my arbitrarily chosen mechanic told me I needed. When I (perhaps foolishly) decided to repair the timing belt failure damage, my sticker shock at the repair bill motivated me to begin educating myself about auto systems and repair. I have always been an iterative learner: try, fail, try again with gained knowledge. I decided I wanted a manual transmission car on which I would teach myself auto repair.

As you can imagine, it's almost impossible to find one of those Mazdas for sale here in the salt belt, as they are about 30 years old at this point. They do not have much of an enthusiast following keeping them alive, either. Discouraged, I started looking for cars with a similar "sleeper" quality. Driving a friend's late-model Acura TSX was a revelation that some luxury cars were more than just coddling technology and actually were high-performance machines. Perhaps because of that, most of my early searches focused on Hondas, Acuras, and Mazdas. Then a friend suggested I look at older BMWs.

I was not sure if I could maintain any BMW, even an older model. All I knew about BMW at that point was the German performance car stereotype of high maintenance costs. Then I discovered the community of seemingly normal people who maintain their own older BMWs and talk about them on internet forums, and the myriad of vendors still selling parts for those old models. The community seemed like the same kind of people who run Mac OS on PC hardware: a small but dedicated group willing to go to great lengths to figure things out and share that knowledge because they are passionate about flexibility and craftsmanship.

I would've preferred a BMW from the late 80's or early 90's, but they have become quite collectible and expensive at this point. Many people suggested looking somewhere around the late 90's E36 3-series or E39 5-series. For sleeper status, a 5-series probably would've made more sense. I knew that if I was going to spend thousands of dollars to buy one then I wanted a powerful one. The M5 was very attractive, but had terrible gas mileage. I also wasn't a huge fan of all the now-outdated technology it provided; I wanted something as simple as my no-frills Honda Civic. The E36 M3 got about 27 mpg on the highway and kept the luxury comforts to as much of a minimum as they could be in the late 90s, so I started looking for one of those.

Of course I didn't want just any E36 M3. I wanted a sedan with manual transmission and cruise control. This was a difficult combination to find. The sedan M3 was only available in 1997 and 1998, and not many remain with both stick shift and the optional cruise control. Eventually I found one that looked good in Florida. It wasn't perfect, but it was a southern car for a good price. Looking back, I realize I would've had an easier time with a slightly more expensive and better maintained car. I don't regret having to learn so much so early in my ownership as I did primarily get it to teach myself new skills, but there are days when I wish the previous owner had replaced more wear items before I bought it.


Between when I bought it in February and now I have:

  • Learned how to actually drive manual transmission
  • Replaced the valve cover gasket
  • Replaced the rear trailing arm bushings
  • Replaced rear exhaust hangers
  • Replaced the windshield cowl cover
  • Replaced the rubber gaskets around the door handles
  • Replaced the rubber gasket around the rear windscreen
  • Reverted to stock shift knob and e-brake handle
  • Installed aftermarket shift and e-brake boot
I've learned there's never a point where you are "done." There's always something else to fix, replace, tweak. It's not a cheap hobby in terms of money or time, but I believe it was worth it for the satisfaction I feel fixing and driving it.

Tuesday, July 2, 2013

Query Google Admin SDK Reporting API For Google Apps Domain, Output to Graphite

The "Reports" section in the Google Apps (gapps) Admin Dashboard has some interesting stats, but only retains data for the past 6 months. If you want to have that data for longer, Google now recommends the Admin SDK's Reports API instead of the deprecated Google Apps Reporting API. I did have a python repo on Github to query the old Reporting API and send the output to graphite, but I have I have since rewritten it to use the new API. It can be found here.

As with the Groups Settings API, you can use these Google Drive SDK instructions to set up a service account and get sample code to access the Admin SDK Reports API for your gapps domain using OAuth 2.0. Most of Google's APIs seem to be moving toward requiring OAuth 2.0, so this two-legged OAuth 2.0 (2LO) method of access for domain-wide delegation will likely soon be required for any script that needs gapps domain-wide authority. I just wish Google did a better job telling gapps admins how to set that up.
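To give a flavor of what that 2LO service-account access looks like in python, here's a rough sketch of the kind of thing the rewritten repo does. The service account, admin address, key file, metric name, and graphite host are all placeholders, and it uses the oauth2client/apiclient libraries of that era, so treat it as an outline rather than the repo's actual code:

    import socket
    import time

    import httplib2
    from apiclient.discovery import build
    from oauth2client.client import SignedJwtAssertionCredentials

    # All of these values are placeholders for your own service account setup.
    SERVICE_ACCOUNT_EMAIL = '1234567890@developer.gserviceaccount.com'
    ADMIN_EMAIL = 'admin@example.com'      # a gapps domain admin, passed as "sub"
    KEY_FILE = 'privatekey.p12'            # downloaded from the API console
    GRAPHITE_HOST = ('graphite.example.com', 2003)

    key = open(KEY_FILE, 'rb').read()
    credentials = SignedJwtAssertionCredentials(
        SERVICE_ACCOUNT_EMAIL, key,
        scope='https://www.googleapis.com/auth/admin.reports.usage.readonly',
        sub=ADMIN_EMAIL)
    http = credentials.authorize(httplib2.Http())

    service = build('admin', 'reports_v1', http=http)

    # Pull one customer usage metric for a given day (accounts:num_users).
    report = service.customerUsageReports().get(
        date='2013-07-01', parameters='accounts:num_users').execute()
    value = report['usageReports'][0]['parameters'][0]['intValue']

    # Ship it to graphite using the plaintext protocol.
    sock = socket.create_connection(GRAPHITE_HOST)
    sock.sendall('gapps.accounts.num_users %s %d\n' % (value, int(time.time())))
    sock.close()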

Tuesday, April 30, 2013

Google Groups Settings API for GApps Admins Using OAuth 2.0

TL;DR: Need to administratively access Groups Settings API (or other Google APIs that aren't in gdata)? Follow these instructions and try this example script.

As a relatively new Google Apps for Education (GApps) admin, it's somewhat surprising to me how confusing Google's API ecosystem is. For python you have gdata and apiclient. Gdata is fairly well documented and only requires a GApps admin username/password for administrative access. You can use gdata to do useful administrative things for your domain, like creating and populating Google groups. But if you want to configure moderation or other Google group settings, you'll need to use apiclient.

My primary complaint with apiclient (the groups settings API in particular) is that the majority of the documentation expects you to be writing for the 3-legged OAuth flow, where some user must authorize your script in a web browser before it can access Google resources as that user. If you want to use GApps admin credentials to administratively manage group settings without user interaction, this is what the "Authorizing requests" documentation has to say:
If your application has certain unusual authorization requirements, such as logging in at the same time as requesting data access (hybrid) or domain-wide delegation of authority (2LO), then you cannot currently use OAuth 2.0 tokens. In such cases, you must instead use OAuth 1.0 tokens and an API key. You can find your application's API key in the Google APIs Console, in the Simple API Access section of the API Access pane. 
The problem? The apiclient python package has no documentation or examples of this. In fact, it seems to have removed all OAuth 1.0 functionality.

The solution? After many Google searches I finally stumbled across the following Google Drive (!!!) doc page: https://developers.google.com/drive/delegation. I created a service account, granted it access to the Groups Settings API in the API console, enabled the groups settings scope in our GApps admin dashboard, and downloaded the .p12 file for the service account. The only stumbling block was realizing that "user_email" (which gets passed as "sub" to SignedJwtAssertionCredentials()) should be a GApps domain admin account.
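In python, the whole thing ends up looking roughly like the sketch below. The service account, admin address, key file, group, and the particular setting being changed are placeholders; the scope and the "groupssettings" service name are the real ones:

    import httplib2
    from apiclient.discovery import build
    from oauth2client.client import SignedJwtAssertionCredentials

    # Placeholder values -- substitute your own service account and domain.
    SERVICE_ACCOUNT_EMAIL = '1234567890@developer.gserviceaccount.com'
    ADMIN_EMAIL = 'admin@example.com'   # must be a GApps domain admin
    KEY_FILE = 'privatekey.p12'
    GROUP = 'somegroup@example.com'

    key = open(KEY_FILE, 'rb').read()
    credentials = SignedJwtAssertionCredentials(
        SERVICE_ACCOUNT_EMAIL, key,
        scope='https://www.googleapis.com/auth/apps.groups.settings',
        sub=ADMIN_EMAIL)   # "sub" is the user_email mentioned above
    http = credentials.authorize(httplib2.Http())

    service = build('groupssettings', 'v1', http=http)

    # Read the current settings for a group...
    settings = service.groups().get(groupUniqueId=GROUP).execute()
    print settings['whoCanPostMessage']

    # ...and, for example, turn on moderation for messages from non-members.
    service.groups().update(
        groupUniqueId=GROUP,
        body={'messageModerationLevel': 'MODERATE_NON_MEMBERS'}).execute()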

Hopefully this helps someone. And hopefully someone at Google cleans up their documentation.



Sunday, October 21, 2012

The Long Road to Logstash

I'm a Splunk addict. I use it almost every day, primarily for problem investigation. So when we started going over our daily indexing limit every day at the start of this semester, I knew I was in trouble. After being locked out from searches for the second time, I started looking for alternatives and found three serious candidates.
There were things I didn't like about all three of them, but Logstash was by far the most flexible. After a lot of confusion and frustration I finally have it at a point where it is useful. What follows are the things I wish I had known before undertaking this project. It more or less assumes you have had some introduction to Logstash. The most recent presentation from PuppetConf 2012 is quite good: http://www.youtube.com/watch?v=RuUFnog29M4

Logstash

  • "Use the grep filter to determine if something exists. Use the grok filter only if you know it will be successful." - This is a big problem with complex log format, like dovecot. I spent many hours trying to write a grok filter that would match every possible Dovecot log line. It's futile. Use the grep filter to grep for something like 'interestingfield=".*"' and add a tag to indicate the field's presence, then grok for 'interestingfield=%{QUOTEDSTRING}' on just that tag. Grok failures are bad. They add the _grokparsefailure tag, and they seemed to contribute to the next problem I ran into, watchdog thread timeouts.
  • Logstash, by default, has a 2-second timeout on all filter operations. If it hits that timeout, it will kill itself. I was probably getting this because I was trying to run logstash, elasticsearch, and kibana all on the same underpowered development VM, but I think part of the problem was that I was doing lots of grok filters that were failing. The recommended solution to the "watchdog timeout" is to run logstash under some system that automatically restarts it. On RHEL6-based distros (and debian-based systems) you probably have upstart. On RHEL5-based distros you can use inittab. There's a good upstart entry in the new logstash cookbook (http://cookbook.logstash.net/recipes/using-upstart/). For inittab you should have a wrapper script that waits a few seconds before attempting to launch logstash, just so the old logstash process can give up the TCP ports it was listening on (see the inittab sketch below).
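To make the grep-then-grok idea from the first bullet concrete, here is a rough sketch in the logstash 1.1.x-era config syntax. "dovecot" and "interestingfield" are stand-ins, and filter option names changed between releases, so double-check them against the docs for your version:

    filter {
      # Cheap check: tag events that contain the field at all.
      grep {
        type    => "dovecot"
        match   => [ "@message", "interestingfield=\".*\"" ]
        drop    => false
        add_tag => [ "has_interestingfield" ]
      }
      # Expensive extraction: only grok events carrying the tag,
      # so the grok is (almost) guaranteed to succeed.
      grok {
        type    => "dovecot"
        tags    => [ "has_interestingfield" ]
        pattern => "interestingfield=%{QUOTEDSTRING:interestingfield}"
      }
    }

And for the inittab route on RHEL 5, something along these lines should work; the inittab id, jar path, config path, and sleep length are all placeholders:

    # /etc/inittab (RHEL 5), one line -- id and script path are placeholders
    ls1:2345:respawn:/usr/local/bin/logstash-wrapper.sh

    # /usr/local/bin/logstash-wrapper.sh
    #!/bin/bash
    # Give the previous logstash process a few seconds to release its TCP
    # ports before init respawns us.
    sleep 5
    exec java -jar /opt/logstash/logstash-monolithic.jar agent -f /etc/logstash/logstash.conf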


Elasticsearch

If you want to be able to search your logs with Kibana, logstash needs to output to Elasticsearch. Unless you already have experience deploying Elasticsearch, you will probably spend more time learning about Elasticsearch than Logstash. I was an Elasticsearch newbie, so some of these may seem like common sense to people familiar with Elasticsearch.
  • There are 2 plugins I would consider essential for running Elasticsearch: head (management), and bigdesk (monitoring). Paramedic is also extremely useful for monitoring multiple cluster nodes. If you decide to use the "elasticsearch_river" output, you will need a river plugin. Unlike other plugins, you *must* restart Elasticsearch after installing a river plugin (at least with the rabbitmq one). A good list of plugins: http://www.elasticsearch.org/guide/reference/modules/plugins.html
  • Elasticsearch is a Java program, so you will need to tune the JVM to your machine. bin/elasticsearch.in.sh is a good place to start. I'd personally recommend setting ES_HEAP_SIZE and telling Elasticsearch to lock that memory. You will almost definitely need to increase the limit of open files; in RHEL it seems to default to 1024, and most recommendations for Elasticsearch are around 300k or 600k. (See the sketch after this list.)
  • There are 3 "outputs" in logstash for elasticsearch. The "elasticsearch" output will run embedded elasticsearch and can either store data in itself or connect to an Elasticsearch cluster and send the data to that. I couldn't get "elasticsearch_http" to work with bulk indexing. I'm currently using "elasticsearch_river", which sends events from logstash to an AMQP (rabbitmq) queue which Elasticsearch indexes data from.
  • You need to tune the caches. I went with setting the field cache type to "soft", which lets JVM garbage collection expire cache entries (also shown in the sketch after this list). For more info, see: http://blog.sematext.com/2012/05/17/elasticsearch-cache-usage/
  • Compress/optimize old indices.
  • Use mappings and the options of the special _source field to limit the number of fields Elasticsearch saves and their type. (http://www.elasticsearch.org/guide/reference/mapping/ and http://www.elasticsearch.org/guide/reference/mapping/source-field.html)
  • Best practice: Make a template that will be applied to all logstash indexes. I started with http://untergeek.com/2012/09/20/using-templates-to-improve-elasticsearch-caching-with-logstash/   
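To make the JVM and cache bullets above concrete, the settings involved look roughly like the fragment below. The values are placeholders and the option names are from memory of the 0.19/0.20-era Elasticsearch docs, so double-check them against your release:

    # bin/elasticsearch.in.sh (or exported in the environment)
    ES_HEAP_SIZE=4g                     # pick a size appropriate for your machine

    # config/elasticsearch.yml
    bootstrap.mlockall: true            # lock the heap in memory (pair with a memlock ulimit)
    index.cache.field.type: soft        # let JVM GC expire field cache entries

    # /etc/security/limits.conf -- raise the open-files limit for the ES user
    elasticsearch  -  nofile  300000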

Monday, May 7, 2012

Compiling posix-winsync Plugin for 389 Directory Server on Linux

https://github.com/cgrzemba/Posix-Winsync-Plugin-for-389-directory-server is a plugin for the 389 directory server that enables syncing of POSIX attributes between 389 and Active Directory. It was written for Solaris, and I was unable to produce a working Linux binary of it using the supplied files. I was able to compile and link it by hand on RHEL 6.2. To do this you will need the binary and devel packages for 389 and nspr. The pkgconfig files (.pc) for both of those should help you if my gcc flags or ld flags don't work on your system. Once you have those in place, the following commands, run in the project directory, should produce a shared object file that can be copied to wherever your 389 directory server plugins live (for me, /usr/lib64/dirsrv/plugins).
  • gcc -fPIC -I/usr/include/nspr4 -DUSE_OPENLDAP -I/usr/include/dirsrv -I /usr/include/ -c posix-winsync.c
  • gcc -fPIC -I/usr/include/nspr4 -DUSE_OPENLDAP -I/usr/include/dirsrv -I /usr/include/ -c posix-winsync-config.c
  • gcc -fPIC -I/usr/include/nspr4 -DUSE_OPENLDAP -I/usr/include/dirsrv -I /usr/include/ -c posix-group-func.c
  • ld -shared -L/usr/lib64 -lplds4 -lplc4 -lnspr4 -lpthread -ldl -L/usr/lib64/dirsrv -lslapd posix-group-func.o posix-winsync-config.o posix-winsync.o -o libposix-winsync.so
At this point you should run
ldd libposix-winsync.so
to make sure all the libraries required by that file can be found. I had to create a new entry in /etc/ld.so.conf.d to point to /usr/lib64/dirsrv and run ldconfig for it to find libslapd.so.0. I'm not sure how the other 389 plugins worked without setting that.
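Concretely, that was just the following (the .conf filename is arbitrary):

    echo '/usr/lib64/dirsrv' > /etc/ld.so.conf.d/dirsrv.conf
    ldconfig
    ldd libposix-winsync.so    # libslapd.so.0 should now resolve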

Then you need to import the ldif file that comes with the plugin into your 389 server. The way the plugin seems to work is that when you set up a Windows sync agreement, it will also sync POSIX attributes. If it cannot find a required attribute, it will not sync that user/group.
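Importing the ldif can be done with ldapadd; the ldif filename below is a placeholder for whatever ships with the plugin, and the bind DN should be your directory manager:

    ldapadd -x -H ldap://localhost -D "cn=Directory Manager" -W -f posix-winsync-plugin.ldif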

It would be nice to create an RPM of this and extend the plugin so the list of attributes it syncs can be dynamic/optional, but for now it gets the job done.