Quick guide to Vanity URLs in Adobe CQ5

Adding vanity URLs in CQ5 is a little cryptic, and the documentation provided doesn’t help much. It’s especially unclear how they behave when domain mapping and URL rewriting have been configured.

How to create vanity URLs in CQ5

  1. First, find your content path to your page. It’s in the page path in the CQ author page, starting with /content. For example, /content/path/to/my/site/page.html
  2. Truncate this to content/path/to/my/site/page by removing the start slash and the page.html from the end. This is the first part of your vanity URL.
  3. Add your vanity name to the end of this. In our example we will have content/path/to/my/site/vanity-url. This equates to http://www.example.com/vanity-url (without the .html).
  4. Open the page properties for the page you want to appear at this address. On the basic tab, open the ‘vanity URL’ section, add a new item, set it to content/path/to/my/site/vanity-url, and if you need it to redirect to the real page address check the ‘redirect’ option. Hit OK, and publish the page.
  5. http://www.example.com/vanity-url should now work as a vanity URL for the page.
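Under the hood, the dialog in step 4 writes standard Sling properties onto the page’s jcr:content node. A rough sketch of the result – the property names (sling:vanityPath, sling:redirect) are real Sling ones, but the path and values here just follow the example above:

```xml
<!-- jcr:content of /content/path/to/my/site/page - illustrative sketch -->
<jcr:content xmlns:jcr="http://www.jcp.org/jcr/1.0"
             xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
    jcr:primaryType="cq:PageContent"
    sling:vanityPath="content/path/to/my/site/vanity-url"
    sling:redirect="{Boolean}true"/>
```

sling:redirect is only set when the ‘redirect’ option is checked; without it, Sling serves the page content directly at the vanity address.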

What might go wrong with vanity URLs in CQ5

Vanity URLs must be unique; it’s unclear what happens if you use the same value twice.

There are plenty of ways this might not work, not least of which are server configuration, mod_rewrite rules, and /etc/map rules.

Why use a vanity URL in CQ5

Vanity URLs are easier to remember, easier to share, and easier to type. They provide a shortcut to a deep link within the site, and are a useful tool in marketing campaigns to promote specific content. In CQ5 this is controlled by the marketers who author the content, using a simple page property. It’s made a little more difficult than it needs to be because the full path must be added in some circumstances, but it’s worth the effort to make the most of this facility.


CQ5 developer wiki

I’ve started to put together a public wiki for CQ5 & AEM6 resources, unimaginatively titled ‘AEM Developers’. It can be found at http://www.aemdevelopers.com/ and is hosted on WikiDot. I’m using it as a place to promote links to articles that are useful, a blog roll, and profiles of developers and companies specialising in Adobe AEM. There’s an events section too that hopefully will fill out nicely as time goes on.

Please sign up and add any blogs or links to guides, and add comments to the pages that are already there. You’re all invited.

Vagrant VMs for Tomcat or MongoDB Developers

I’ve set up a couple of small GitHub projects to help development when using MongoDB or Tomcat 7.

Tomcat 7

This is a Java 7 + Tomcat 7 machine, with host folder mappings for ./webapps, ./log, and ./conf. It makes it really easy to deploy a .war file into the VM for testing, assuming no other configuration is needed – just drop it in the ./webapps folder.
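As a rough sketch of how those mappings are expressed (the box name and guest paths here are placeholders, not necessarily the project’s exact Vagrantfile):

```ruby
# Illustrative Vagrantfile sketch - box name and guest paths are placeholders.
Vagrant.configure("2") do |config|
  config.vm.box = "centos-6-base"                     # placeholder base box
  config.vm.network :forwarded_port, guest: 8080, host: 8080

  # Host folders mapped into the guest, so a .war dropped into ./webapps
  # on the host is picked up by Tomcat inside the VM.
  config.vm.synced_folder "./webapps", "/var/lib/tomcat7/webapps"
  config.vm.synced_folder "./log",     "/var/log/tomcat7"
  config.vm.synced_folder "./conf",    "/etc/tomcat7/conf"
end
```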


MongoDB

This was the original project, stemming from frustration at having to install different flavours of Mongo to learn and experiment with, depending on the operating system. It maps ./data on the host to /data on the guest machine, to preserve the MongoDB data.

Vagrant and Puppet

Vagrant makes it really easy to script VirtualBox machine configurations, and Puppet is a good tool for designing machine configurations, so getting the two to work together was fun. The hardest part was probably getting the permissions on the shared folders right in the Vagrantfile; they are a little too open for my taste, but this is intended as a development environment, not for production use. You could take the Puppet manifests and use them on real servers if you wish.

The Vagrant base box

Ironically, I’ve used a Chef base box from the Vagrant cloud for a configuration based on Puppet. There are a few reasons for this. Firstly, I had it downloaded already, and at the time I needed one that was close to RHEL6. Secondly, amongst all the boxes I tried, this one seemed the most stable. Finally, I didn’t want to build a custom box yet as this one works just fine. It has a long first startup while it installs updates and dependencies for Puppet, but it’s otherwise good and seems to be as bare a machine as possible.


These projects are released under the Unlicense – effectively public domain. For more information, please refer to http://unlicense.org. You can do whatever you want with the code in the Git repositories.

Five ways to prepare for AEM6

So, AEM6 has the option of using MongoDB instead of TarPM, and Solr instead of Lucene. What does this mean to the developers who work with CQ5, those who are looking to get into AEM, and the architects who are designing and planning implementations?

First up, developers should be proactive and research Mongo and Solr. An ideal place to start is Mongo University, which offers free entry-level certifications delivered over a number of weeks via YouTube.

Secondly, go download the software – both Mongo and Solr are open source. We won’t know the exact versions AEM6 requires until it’s released, but the differences are minor for learning purposes. Both are well documented and popular, with plenty of online and printed resources to help you learn how to use them well.

Thirdly, be prepared for a barrage of inbound CVs for AEM roles with only Mongo or Solr experience, because some recruiters work by pattern matching. Some of these will be great developers, so don’t turn them away for a lack of CQ skills – if they have general Java web development experience they’ll do well. Also expect some CQ developers to take this opportunity to leave the niche and enter a larger market (which also has a skills shortage as far as I can tell). It’ll take a while for this to start, but it’ll happen.

Fourthly, ignore Sightly. It pains me to make that recommendation, but it’s a proprietary language, so you won’t really be able to prepare for its arrival.

Finally, expect to do extensive reading on which option to use depending on the situation. It’s a very flexible system using Jackrabbit Oak to orchestrate the persistence and indexing. You’ll probably need to do your own benchmarking and tuning regardless of the general recommendations but it’s better to start with the most likely choice.


A reason to calibrate colours on OS X

I don’t really do much image manipulation, but I’ve found a great reason to calibrate colours on OS X – to make the screen easier on the eyes when doing development. The machine I use most is a five-year-old MacBook Pro – nowhere near the display standard of modern machines, but tolerable once configured correctly.

Open System Preferences and go to Displays. The second tab is ‘Colour’, which shows a list of available calibrations. I find the default settings way off on all three Macs I use – too bright, with too low a gamma, making everything look washed out.

Hit the calibrate button, enable expert mode, and follow the instructions. Near the end there’s an option for gamma [I stick to 2.2], and then an option for colour temperature. You can click on the labels or move the slider. For development I find D50 [5000 K] is soft on the eyes, whereas D65 [6500 K] gives a better representation of greyscales. Anything higher than this looks blue to me.

Whatever you do, make sure it’s comfortable for you. I find that unless all my displays are set to the same colour temperature and gamma, it’s unpleasant to move between machines.

The POJO antipattern and data-centric design

The POJO antipattern arises when developers create plain Java ‘objects’ that have no behaviour, only getters and setters. It’s an insidious misuse of object orientation, and it makes application code too data-centric in the name of data modelling, abstraction, and persistence, when there are many reasons to avoid this style of coding.

The car analogy

In a POJO world, imagine an interpretation of a car, remembering that you can only use getters and setters. You might end up with something like:

Car car = new Car();
car.setSpeed(70);

All this is very well and good, except it isn’t. By error, it’s possible to set a speed without a driver or a direction. It also doesn’t actually do anything once it’s set; there’s no logic. It’s like a car without an engine.

The worst part about POJOs is that it’s utterly pointless writing unit tests for them, making them an antagonist in the world of TDD. They artificially change coverage percentages purely by their presence, and help hide other code that should be tested.

What should a car do?

A better model of a car is by its behaviour: a mode of transport.

Car car = new Car(location);
car.drive(destination, driver, passengers);

Less code, more validation, better logic, and it can still be persisted via its serialisable fields if needed.
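To make the contrast concrete, here’s a minimal sketch of such a behaviour-first Car. The class, method, and field names are illustrative, invented for this article rather than taken from any real API:

```java
import java.util.List;

// Illustrative sketch only - Car, drive(), and currentLocation() are made up
// for this article, not a real API.
public class Car {
    private String location;

    public Car(String location) {
        if (location == null) {
            throw new IllegalArgumentException("a car must start somewhere");
        }
        this.location = location;
    }

    // Behaviour, not data: the car only moves when given a destination and a
    // driver, so the "speed without a driver" state of the POJO version
    // simply cannot be constructed.
    public void drive(String destination, String driver, List<String> passengers) {
        if (destination == null || driver == null) {
            throw new IllegalArgumentException("driving needs a destination and a driver");
        }
        this.location = destination;
    }

    public String currentLocation() {
        return location; // read-only view; still serialisable for persistence
    }
}
```

Because the only way to change state is through drive(), the validation lives with the behaviour, and there is real logic worth unit testing.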

Why you should avoid the POJO antipattern

Using POJOs in modern code forces the application internals to be data-centric and verbose, and obscures the logic and requirements that created the application in the first place. Instead you should model the behaviour of the object – what it does rather than what it is. Avoiding the POJO antipattern helps in the long term as you model behaviour rather than the underlying data, and enables advanced techniques such as the naked objects pattern for generating user interfaces.

Avoid creating Microsites

Microsites are small promotional or campaign-based sites intended to showcase a particular feature or support an event. They are often used to create a small block of content pages parallel to the main site, in an attempt to circumvent the controls, politics, technologies, or other constraints that govern the main site. They are a sign that the guidelines are too restrictive or are crafted in a way that makes it difficult to be creative. Perhaps ownership of the content and the designs prevents the creation of campaign-driven pages. Perhaps it’s perceived as quicker and cheaper to spawn a new site than to work with the existing processes and solutions, whether because of the technology or the people. Perhaps it’s too cumbersome to get official sanction to modify a live site just to add a few pages, or the development process for creating new templates is not agile enough. It may even be that your CMS is such that the developers cannot turn the work around in time, so opportunities are missed unless a microsite is created separately from your main sites.

All of these concerns should be addressed.

Make it easier to avoid Microsites

If you aren’t using a CMS, get one. Steer clear of those that are just click-and-create in favour of more structured approaches; it will pay off in the long run. Try to pick something that uses a component metaphor. Stick to mainstream technologies unless you have good reason to use something different – .net, Java, and PHP are all great choices. Use whatever is already in-house; it’s much cleaner that way and the transition will be less of a culture shock for the developers and system administrators.

Draft in a coach to guide the developers as they change their work patterns. Old habits take time to die, and new habits take weeks to be adopted. Start gently and build up over a period of time to grow the changes. Take an Agile approach to implementing Agile.

Look at how you can create useful and flexible templates within your CMS that can be driven by marketing. Use component-driven development styles to make it easier to re-use content and code later.

Use link shorteners to allow you to link deeply within your live site. This alone might be enough to eliminate microsites if you can replace a long complex link with a short, easy, memorable one that links to a page on your current site.

Review your content guidelines and change them to enable short campaigns. You will need to consider the landing pages that are needed and the other page types that could exist in microsites and give enough freedom to create some great designs without compromising the rest of your site. URL structure is also important and the naming guides should have definite rules on what is or isn’t allowed.

Acknowledge that some content has a lifespan of years, and that some might only be around for days, possibly even hours. Links to campaigns might exist for a long time, so it’s much better that they are managed centrally in one system and redirections put in place after they expire. All too often microsites are ignored and left live after they have run their course. This looks amateur at best, and is illegal in some cases. A regulated system will be able to manage this automatically and should be part of the design of the metadata for the campaign so that the CMS can take action when the time comes.

Switch to use Agile methods within the development team with a view towards a programme of continual improvement. Shortening the turnaround of new work can be key to eliminating microsites, even if the first versions are light on features. It’s better to have something that works but is simple than to wait months for something that might not work or might do the wrong thing.

Get buy-in for broad changes to the site from the top level. This is key; they have to enable the changes and fuel the fire that makes it all happen. They can authorise more resources if they are needed, and make changes to streamline the process of mixing the existing content with new short-term marketing material. They will need to agree the new online guidelines particularly if it involves communication styles.


Get a CMS. Use Agile. Rewrite your guidelines and change your CMS code to make it easier. Link shorteners help. Create pages in the current site instead. Make sure the boss knows microsites are evil.

Some Java Tools

I’ve been experimenting more than usual with software that promises to make me more productive and happier in my work. Those are fairly broad and ambitious claims, but not without merit for the most part. I’ve looked at Maven, jRebel, jUnit, Infinitest, MoreUnit, and Spring Roo.


Maven

Maven is a brilliant solution to the big timewaster that is downloading and installing jar files in order to build a project, not to mention the chaos that ensues when the wrong versions are used, and the tedious documentation needed to capture the requirements just to build, never mind run, the code. Maven fetches the right version of each jar, and the jars for any dependencies too. It’s great: just a couple of lines of XML and it knows what it needs to do.
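That ‘couple of lines of XML’ is literally a dependency declaration; Maven resolves the named artifact and its transitive dependencies from the declared version. For example (artifact and version chosen purely for illustration):

```xml
<!-- One dependency declaration - Maven fetches this jar and everything it needs -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
</dependency>
```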

I’ve been using Maven archetypes to create Hippo and Magnolia projects instead of using the downloads from the websites; both ship with demo content that I don’t want. Building from Maven gives me two big advantages. Firstly, upgrades are potentially as easy as changing a version number and rebuilding. Secondly, it grants much more control over which features and modules are included. Hook it up to a CI server such as Jenkins and it makes testing and feedback quick and easy.


jRebel

If ever there was a development tool that sounds too good to be true, this would be it: live changes to running code, eliminating restarts and redeployments to the JVM. For a lot of system development this is good, and has the potential to save a lot of time. It has shortcomings around class hierarchy modifications, and it’s not really suited to OSGi systems such as Adobe CQ5. I’ve not found a use for it with Magnolia CMS either. However, it really shines with Hippo and for more bespoke work with the Spring framework.

One thing I did find out is that you need to add the jRebel nature in Eclipse; if you don’t, it just doesn’t do anything. I’m on my second trial because of this – the first one ended with zero time saved.


jUnit

As an experiment, I elected for a test-driven style of development when porting some .net code over to Java. The general approach was to copy the code in, class by class, into Eclipse, and create any missing classes – these are the imported packages that form the .net framework, such as System.IO. Eclipse derived the methods in the classes, and rather than write the code up directly I created a full test suite. Creating the code from the unit test classes made it much easier to work through the missing methods and know which ones were still left to do.


Infinitest

This free tool (available from the Eclipse Marketplace) was essential in getting the tests to work correctly. For the most part it works well, running the tests whenever the relevant code changes. The downside is that the runner ignores methods that throw Error instead of Exception – it was marking a test as passed even though an Error was thrown.
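The distinction matters because jUnit assertion failures are thrown as java.lang.AssertionError, which extends Error rather than Exception, so a runner that only catches Exception silently misses them. A tiny illustration of the hierarchy – the classify helper is hypothetical, nothing to do with Infinitest’s internals:

```java
// AssertionError extends Error; both Error and Exception extend Throwable,
// so catching Exception alone does not catch assertion failures.
public class ThrowableKinds {
    public static String classify(Throwable t) {
        if (t instanceof Exception) return "exception";
        if (t instanceof Error) return "error";
        return "other";
    }
}
```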


MoreUnit

This is a pretty lightweight addon for Eclipse that does very little for me. It adds some views that show lists of missing tests for the current class or project. It’s reasonable at what it does, but it doesn’t do much. At least it’s small and unobtrusive, so I’ll probably leave it installed. There’s also a feature for mocks, but I’ve not tried it yet.

Spring Roo

I’m not convinced by Roo. I’ve had a fairly quick play with it, and looked at the book by Rimple & Penchikala, and the other one that O’Reilly publish. It seems fairly competent and feature-rich, but for some inane reason it’s not possible to preview or undo the changes it makes, and this makes it hard to experiment with properly. Sure, I could use version control, but that feels like overkill for a playground test. Combine this with the fact that all the GWT examples I tried didn’t work at all and broke the project completely, and I think I’ll look elsewhere for tools to rapidly build web applications.


I’ll probably use all of these, though I might wait for Spring Roo to mature – it’s only on version 1.2.2 at the moment. Perhaps version 2 or 3 will offer more consistency and stability. Only one of these is a paid-for offering (jRebel), but it’s worth the cost in most cases.

Magnolias, Hippos, Gits, and the Blues

A summary of recent times…

Magnolia CMS development

I’ve been working on migrating news content from a PostgreSQL database into Magnolia CMS, which at a technical level involves reading records from the database and programmatically creating the pages in Magnolia. In reality it’s far more involved than that. The nodes created in the JCR store have to match the structure expected by Magnolia, particularly as the project uses the STK, which provides a set of page types and components that should be used. Thankfully, Magnolia makes it easy to discover this structure with its ‘export as XML’ feature.

The hard part of this project has been identifying which online samples and examples are for version 4.4 or earlier, and which are for version 4.5. The differences are quite significant, but my past experience with JCR has shown the way. As a broad generalisation, if you can do it with plain JCR, you should. The current version is a stepping stone to version 5, so using the JCR2 API should be future-proof; using the now-deprecated 4.4 code style is a bad idea.

It’s been a lot of fun working out the Maven dependencies to make the project work exactly how I think it should, and to include only the modules relevant to the end result. In just six weeks there’s now a comprehensive platform to build on, with a custom theme, customisations to the STK, new dialog designs, and lots of additions to the data modules, all installable as modules. It’s impressive, and it really highlights how good Magnolia is from a developer’s point of view.

Hippo CMS work

I’ve been busy working on a side project using Hippo CMS 7.7. I wanted to build a document-centric site, based on articles and news, for my own use. Hippo is perfect for this. I can define a document type by defining fields in a web interface, add a whole bunch of items, and write and edit to my heart’s content. It’s an author’s dream for publishing on the web. Unlike most CMSes, this really is managed content: these are not pages but true content with a structured type. The page navigation and URL schema are defined according to the channel they are presented in – be it desktop, mobile, or something else entirely.

From a developer’s point of view this is a great system too. Models are Java classes, and the default in the current version, when created from the Maven archetype, is to synchronise changes in the JCR with the code on the file system. Thus changes made in the CMS are automatically reflected in the XML files in the project. The documentation is pretty clear on the subject and discusses the DTAP model well. In particular it explains what a developer should change directly, and what they should apply delta changes to – this is important to avoid obliterating settings made in the live environment. Overall, Hippo is a great product, let down only by some odd authoring UI choices and a lack of workflow on media items (they seem to publish automatically, unlike documents).

Exploration of Git

I’ve been looking for alternatives to SVN. Subversion has some serious deficiencies when it comes to branching and merging, not to mention its poor conflict resolution when several people work on the same branch.

I’ve found that the command-line version makes sense and provides lots of help. The GitHub for Windows and GitHub for Mac clients are very good. Commendations go to SourceTree on the Mac, Git-Cola on Linux, and not much else on Windows as yet – there has to be a highly polished client with a great UI for the world’s most popular desktop operating system, but I’ve yet to find it. Tower on the Mac looks amazing. It looks like I’ll have to make do with GitExtensions on Windows for the time being.

The big advantage of Git is that it’s possible to have a local repository as a working copy, multiple remotes, and multiple branches. As a freelancer, this means I can have a working copy on my laptop, another on my desktop machine, and my primary master on a server at home, and I can take the laptop to the client and push the changes into their repository. It all works so well.

Creating a .net project to migrate CQ4 ContentBus content into a database

This is basically file hacking, and not for the faint of heart. It’s a great chance to use the knowledge I gained five years ago about the internal structure of Day Communiqué CQ3 and CQ4 ContentBus storage. I chose to do it in .net for a few reasons – the main one being that the client has .net developers. C# is very good at the tasks this project needed, namely file system access, string manipulation, and database access.

Porting the .net application to Java

Finding out that the content to import would not extract onto an NTFS file system was a blow, so the application is now being ported to Java. I’ve chosen to do this by creating a very small emulation layer that allows most of the code to stay the same. The only place I really needed to make big changes was around the associative arrays used by the SQL client libraries – Java has no built-in associative array syntax, so a Map takes their place.
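As an illustration of that change (the class and data here are made up, not the project’s actual code), a C# associative lookup like row["title"] becomes a get() call on a Java Map:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical example: where C# code indexes a row by string key
// (row["title"]), the Java port uses a Map and row.get("title").
public class RowLookup {
    public static Map<String, Object> exampleRow() {
        Map<String, Object> row = new HashMap<>();
        row.put("id", 42);
        row.put("title", "Example record");
        return row;
    }
}
```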

Learning some basic blues chops on the guitar

We went to see Hugh Laurie at the Hammersmith Apollo, and we’re great fans of his album. I’m partial to a bit of blues, and in a recent exercise in tidying up I found a copy of ‘Beginning Fingerstyle Blues Guitar’ by Arnie Berle and Mark Galbo. I’d long forgotten about it, and having worked through the first 30 or so pages I have to say that it is really rather excellent.

How to create a CQ5 component in Multiple Groups

This little trick might just help if you need the same CQ5 component in multiple groups. If you manage the design of the paragraph system using whole groups, it becomes very easy to add a new component without changing settings in the CMS.

  1. Create a new component
  2. Remove the JSP script and the dialog
  3. Add a sling:resourceSuperType property to the new, empty component and set it to the real component
  4. Add a componentGroup property and set it to the name of the group
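In repository terms, the four steps above amount to a tiny node definition. A sketch of what the proxy component’s .content.xml might look like – the paths, titles, and group name are illustrative:

```xml
<!-- /apps/mysite/components/text-groupx/.content.xml - illustrative sketch -->
<jcr:root xmlns:jcr="http://www.jcp.org/jcr/1.0"
          xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
          xmlns:cq="http://www.day.com/jcr/cq/1.0"
    jcr:primaryType="cq:Component"
    jcr:title="Text (Group X)"
    sling:resourceSuperType="mysite/components/text"
    componentGroup="Group X"/>
```

No scripts, no dialog – resolution falls through to the super type, while the component appears in its own group in the sidekick.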

Technically it’s a new component, and places where it’s used will have content created with this component’s resource type. This is a good thing; it means the code for all the versions lives in a single location – until you decide to branch and make the version for group X do something that doesn’t apply to group Y.