I was recently asked to help a client set up their server environments and figure out a development workflow so the site can be moved from dev -> staging -> live in a repeatable manner. Tired of the way I was doing things myself, I took it as an opportunity to see some of the ways other developers were setting up their development workflows.
A few months back, I blogged about creating dynamic migrations. With a small amount of code, you can do something very powerful: bring in large amounts of data that need to fit into different places with one simple class. And when all of these containers hold close to the same kind of data, it is an obvious choice. Commerce Migrate approaches migrating data from Ubercart to Commerce in exactly this way and does a great job of bringing over the core fields of an Ubercart product. But what do you do when you need to add additional sets of data for a particular type of entity bundle? The client that needed my help had various kinds of information attached to their products: taxonomy terms from various vocabularies, additional image fields, text fields, stock, etc. These are fields that do not get associated with Commerce products / product displays in the initial migration. When I first saw this, I was completely stumped; it seemed to mean rewriting all the dynamic migrations generated by Commerce Migrate as hand-written migrations (not a task I was looking forward to, given that I would essentially be copying/pasting code to get the desired effect without actually using Commerce Migrate).
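As a rough illustration of what extending one of those dynamic migrations can look like, here is a minimal sketch. The parent class name, the 'legacy' connection key, the {uc_product_extras} table, and the extra field names are all assumptions for illustration, not the client's actual code:

```php
<?php

/**
 * Minimal sketch of extending a dynamic Commerce Migrate migration to pull
 * in extra per-product data. All names below (the parent class, the
 * 'legacy' connection key, the {uc_product_extras} table and the extra
 * fields) are hypothetical.
 */
class MyStoreProductMigration extends CommerceMigrateUbercartProductMigration {

  public function __construct($arguments = array()) {
    parent::__construct($arguments);

    // Map the additional destination fields the stock migration skips.
    $this->addFieldMapping('field_stock', 'stock');
    $this->addFieldMapping('field_extra_image', 'extra_image');
  }

  /**
   * Attach the extra source columns to each row as it is migrated.
   */
  public function prepareRow($row) {
    if (parent::prepareRow($row) === FALSE) {
      return FALSE;
    }
    $extras = Database::getConnection('default', 'legacy')
      ->query('SELECT stock, extra_image FROM {uc_product_extras} WHERE nid = :nid', array(':nid' => $row->nid))
      ->fetchObject();
    if ($extras) {
      $row->stock = $extras->stock;
      $row->extra_image = $extras->extra_image;
    }
    return TRUE;
  }
}
```

The appeal of this pattern is that the parent class keeps doing the heavy lifting for the core product fields, while the subclass only has to care about the handful of extras.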
Yesterday evening, I was working with a client whose site does some interesting things with one of its custom search pages. They send AJAX requests to the backend to get two types of values for their users:
- A count of the total number of nodes of type X that match the criteria
- A count of the total number of nodes of another type Y that are referenced by nodes of type X (Y can be referenced multiple times by various X nodes, but for this we just want that total back).
Instead of opting for straight database queries to get the data, they were using the EntityFieldQuery approach to get the initial list of X, since they were using fields. It's not quite as fast, but it's a much more flexible approach (and if they opt to change their field storage in the future to something like MongoDB, they can have something really fast without having to change a single line of code!). The one problem with EntityFieldQuery, however, is that it will only return a listing of entity IDs, meaning that if we want other pieces of data, we have to load up the entity. In their scenario, the only other piece of data they wanted was the reference field data, and performing an entire entity_load (or node_load, to be specific) would mean also loading up the 50 other fields they are storing. So a retrieval like this on uncached content took 3 to 4 seconds for this data alone.
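To make the problem concrete, here is a rough sketch (with hypothetical bundle and field names) of getting both counts while loading only the reference field via field_attach_load(), rather than paying for a full node_load() of every match:

```php
<?php
// Hypothetical bundle and field names, for illustration only.
$bundle = 'x_content';
$reference_field = 'field_y_reference';

// First value: how many X nodes match the criteria.
$count_query = new EntityFieldQuery();
$count_query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', $bundle)
  ->propertyCondition('status', NODE_PUBLISHED);
$x_count = $count_query->count()->execute();

// Second value: how many Y references those X nodes hold. Get the IDs first.
$query = new EntityFieldQuery();
$result = $query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', $bundle)
  ->propertyCondition('status', NODE_PUBLISHED)
  ->execute();

$y_count = 0;
if (!empty($result['node'])) {
  $nids = array_keys($result['node']);

  // Build minimal stubs (nid, vid, type) instead of node_load()ing
  // everything, then attach only the one field we care about.
  $stubs = db_query('SELECT nid, vid, type FROM {node} WHERE nid IN (:nids)', array(':nids' => $nids))
    ->fetchAllAssoc('nid');
  $field_info = field_info_field($reference_field);
  field_attach_load('node', $stubs, FIELD_LOAD_CURRENT, array('field_id' => $field_info['id']));

  foreach ($stubs as $stub) {
    $items = field_get_items('node', $stub, $reference_field);
    $y_count += $items ? count($items) : 0;
  }
}
```

The 'field_id' option to field_attach_load() is what keeps the other 50 fields out of the query entirely.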
As many folks know (and as you can see from my posts on the topic), I am a very big fan of Migrate. It takes a while to figure out what you want to do and how to do it, but the power is absolutely immense. Having complete control over the source object, right down to manipulating the destination node/user/entity, while still working within a framework has made this my favourite module.
We had a meetup on High Performance Drupal last night (see link) and, being a fan of high performance systems/applications, I attended. Robert Brown did a fantastic presentation on using New Relic to diagnose potential issues in your application stack (it has hooks specifically for Drupal in addition to other niceties such as Apache, memcache, and Solr; I didn't realize you could also monitor your site with it). I look forward to talking with him more about performance in the future.
To get myself more fully acquainted with Nginx, I decided to finally take the plunge and move away from Apache. To be clear, I do not have any real problems with Apache; it has treated me very well for years, and it is flexible and very easy to configure. I've also been using Varnish as an HTTP accelerator on this blog for a few months, and given that I'm mostly dealing with anonymous visitors viewing content, that setup was more than adequate. But I have also heard great things about Nginx for the past couple of years...
I did a lightning talk at a Drupal meetup in LA on the Field Collection module for Drupal 7. For those that do not know, Field Collection is the successor to the multigroup module in CCK from Drupal 6. It allows a user to attach a grouping of fields as a single field to any type of entity (including another field collection, so you can have nested field groupings inside other field groupings).
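As a quick illustration of how that grouping hangs together, here is a sketch of creating a field collection programmatically; the field names and the 'article' bundle are made up. The fields inside the collection are attached to the field_collection_item entity type, with the collection's field name acting as the bundle:

```php
<?php
// Hypothetical field names. Create the collection field itself and attach
// it to the 'article' node type.
field_create_field(array(
  'field_name' => 'field_dimensions',
  'type' => 'field_collection',
  'cardinality' => FIELD_CARDINALITY_UNLIMITED,
));
field_create_instance(array(
  'field_name' => 'field_dimensions',
  'entity_type' => 'node',
  'bundle' => 'article',
  'label' => 'Dimensions',
));

// Fields inside the collection live on the 'field_collection_item' entity
// type, using the collection's field name as the bundle.
field_create_field(array(
  'field_name' => 'field_width',
  'type' => 'text',
));
field_create_instance(array(
  'field_name' => 'field_width',
  'entity_type' => 'field_collection_item',
  'bundle' => 'field_dimensions',
  'label' => 'Width',
));
```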
Since the last time I posted on using the jQuery UI Datepicker for event navigation, we launched the site that I built this functionality for (if you are interested, visit REDCAT). A few months ago, we received a feature request to extend that functionality by displaying (or highlighting) the list of events that occur in a given month.
I was recently tasked with importing data from a deprecated database table (Q&A) into a Drupal 6 site. A few notes about the data being imported (see the sketch after this list):
- It is a one-time import.
- It boils down to one flat database file.
- No new content of this type will be added again. Ever.
- It should not appear in search results.
- They need an easy way to go through all the data.
- It would be great if it could be filtered and made searchable.
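Just to make the shape of the task concrete, a one-off import like this can be sketched in Drupal 6 along the following lines. The {legacy_qa} table, its columns, and the 'qa' content type are hypothetical names, and this is only an illustrative sketch, not necessarily the approach taken here:

```php
<?php
// Rough one-off import sketch for Drupal 6 -- the {legacy_qa} table, its
// columns, and the 'qa' content type are hypothetical names.
$result = db_query("SELECT question, answer, created FROM {legacy_qa}");
while ($row = db_fetch_object($result)) {
  $node = new stdClass();
  $node->type = 'qa';
  $node->status = 1;  // Published.
  $node->uid = 1;
  $node->language = '';
  $node->created = $row->created;
  $node->title = truncate_utf8($row->question, 255, TRUE);
  $node->body = $row->answer;
  $node->teaser = node_teaser($row->answer);
  $node->format = FILTER_FORMAT_DEFAULT;
  node_save($node);
}
```

Keeping the data out of search results and making it filterable are what push the decision beyond a plain loop like this.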
Last week, Jakub Suchy proposed an action for Drupal developers to contribute 30 minutes each day for 5 days (this week) towards the Drupal community. This can involve anything from providing support on forums/IRC, to writing and reviewing patches, to documentation (and so on and so forth). So I am laying out some of my plans; join us in making Drupal better for everyone.