John Vine

About John Vine

John carefully crafts a great user experience for the many types of visitors to college and university websites, while maintaining focus on best practices. He specializes in custom HTML, CSS, and JavaScript, including server-side logic for more complex solutions. Over the last 6 years, John has helped optimize websites for Beacon clients including Clark Atlanta University, Husson University, Hawaii Pacific University, CSU Pueblo, and Lees McRae College. John holds a bachelor's degree in Game Development from Full Sail University.

Making PDFs Accessible (Section 508 Compliant)

By John Vine | April 21st, 2016 | Categories: Web Development

As 508 compliance becomes more and more crucial for higher ed and ecommerce sites, many developers are encountering Priority 1 issues regarding PDFs. Depending on who owns the document, it might seem prudent to send the PDFs to a separate department (those content people who write words good), as the task of tagging every image, table, and piece of metadata can seem daunting to those used to working with code. Thankfully this task need not induce panic in us developers, as there is a rather fast method for cleaning up PDFs that I'm happy to share with you all (even you content folks with all them fancy vocabulistics).

A couple of prerequisites before we begin: Microsoft Word and Acrobat Pro DC (a free trial of Acrobat Pro DC is available from Adobe).

  1. Open the PDF you'd like to make accessible with Acrobat Pro DC.
  2. In the right column, expand the "Export" section. If the current document isn't pre-populated, select it, then select Microsoft Word (.docx) and click "CONVERT".
  3. After the conversion is complete, you'll be able to save the file to your local machine.
  4. Open the newly generated .docx file in Word.
  5. The final step depends on the version of Microsoft Word, but version-specific instructions can be found in each version by pressing F1 and scrolling down to the accessibility section. In Microsoft Word 2013, click Save As, select the location you'd like to save the file to, choose PDF in the "Save as type" drop-down, then click "Options…" and make sure "Document properties" and "Document structure tags for accessibility" are both checked.

There you have it: accessible PDFs without the perceived headache associated with tagging PDFs manually. Enjoy all your newfound free time!


Building a Paginated Listing with Only HTML and CSS for Responsive Design

By John Vine | March 10th, 2015 | Categories: Web Development

Most paginated listings on the web are managed with server-side logic, JavaScript, or a combination of both. With the first option, the user's browser waits on network traffic up to the server and back down again for every page. Even if the data is as succinct as possible, a bad connection or crowded Wi-Fi can make your site look slow. An example of this method can be seen on Google search results: each page requires a new query string and a full page load. With the second option, the HTML for each page is loaded into the DOM on the first page load, and a script then manages displaying each separate page or section of data as the user interacts with the page's navigation. The downside of this approach is that if script is disabled, as is so often the case on computers with high security settings, the only ways to recover are to display a single page at a time using server-side logic, or to display the entire listing without pagination at all, which can make for an enormously long page. The last option uses a combination of both and can result in a smoother user experience, but it also suffers from the drawbacks of both. You can see it in effect by scrolling to the bottom of a Facebook page: the first "page" of content is loaded up front, and only by scrolling down is additional content fetched from the server and added to the page via JavaScript.

I think that's enough of the bad news! You're here to find a way to circumvent all of these headaches so that you can give even the most restrictive browsers as much access to your content as possible, with the best user experience you can deliver.

Let's get started, shall we?

We’ll start with a sample of how you might list results currently. For ease of reading and instruction we’ll keep the content short and sweet, and we’ll keep our pages limited to 3 results per page.
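For example, a flat listing of nine results might look something like this (the markup and class names here are illustrative):

    <ul class="results">
        <li>Result 1</li>
        <li>Result 2</li>
        <li>Result 3</li>
        <li>Result 4</li>
        <li>Result 5</li>
        <li>Result 6</li>
        <li>Result 7</li>
        <li>Result 8</li>
        <li>Result 9</li>
    </ul>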

We have to make a few changes to this to make it work in pages. Let's see what that looks like:
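Here's one way that might look; the .cssPagination wrapper, the .pages container, and the page ids are assumed names used throughout the sketches below, and the default page (#page1) is listed last:

    <div class="cssPagination">
        <div class="pages">
            <div class="page" id="page3">
                <ul>
                    <li>Result 7</li>
                    <li>Result 8</li>
                    <li>Result 9</li>
                </ul>
            </div>
            <div class="page" id="page2">
                <ul>
                    <li>Result 4</li>
                    <li>Result 5</li>
                    <li>Result 6</li>
                </ul>
            </div>
            <!-- The default page goes last so CSS sibling selectors can reach it -->
            <div class="page" id="page1">
                <ul>
                    <li>Result 1</li>
                    <li>Result 2</li>
                    <li>Result 3</li>
                </ul>
            </div>
        </div>
    </div>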

Now that we've got the results separated into page containers, there's one important thing to make note of. You may have noticed that #page1 is actually listed last. The order of the pages that aren't the first page isn't relevant to this process; what matters is that the page you want displayed by default, or first, is the last child of the .pages container. This lets us take advantage of a special kind of CSS selector in order to get around CSS's inability to select nodes up the DOM. If you intend to use my advanced method below, however, it is necessary to list the pages in order, with the only exception still being that the first page is listed last.

That selector is called the general sibling selector. It selects all elements that match the right-hand selector and are siblings of the element matched by the left-hand selector, as long as they appear in the HTML after it. In our sample, a rule for "#page3 ~ .page" would select #page2 and #page1, but "#page2 ~ .page" would only select #page1.

We’re going to use that selector to hide all pages that aren’t the first one.
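Since CSS can't look backwards, one way to do that is to hide every page and then turn the last child (our first page) back on; the sibling selector comes into play in the tweak a little further down. A sketch, assuming the markup above:

    .cssPagination .page { display: none; }
    .cssPagination .page:last-child { display: block; }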

This ensures that when the webpage loads, only one page of your list will display, and that it will be the first page. In order to give the user a way to change pages, we need to create a page listing in our HTML.
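A sketch of that navigation, with anchors whose hrefs match the page ids (placed as a sibling of the .pages div, inside the wrapper):

    <ul class="pageNav">
        <li><a href="#page1">1</a></li>
        <li><a href="#page2">2</a></li>
        <li><a href="#page3">3</a></li>
    </ul>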

The new .pageNav element can go anywhere on the page, but it must not be added as the last child of the .pages div, so that our previous rules continue to show and hide the correct divs. To make those links functionally show their respective pages, we need to make an additional tweak to our CSS:
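Something along these lines: the targeted page is shown, and the general sibling selector reaches forward to hide the default (last) page whenever a different page is targeted.

    .cssPagination .page:target { display: block; }
    .cssPagination .page:target ~ .page:last-child { display: none; }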

These new rules showcase the two selectors that make this whole thing work. One we've already discussed. The new one is the :target selector, which applies to an element whose id matches the hash (fragment identifier) in the URL. When one of the nav links is clicked, its href is appended to the URL. A live demo of this in action is available here.

The key to this HTML/CSS trick is simply outputting your first page last, which is easily possible in most content management systems. The rest of it is very simple. This solution is responsive, 508 compliant, and search-engine friendly.

Another benefit this method has over JavaScript is that it takes advantage of the browser's ability to remember the previous hash in its history, which means that if a user follows a link from your paginated listing and then clicks the back button, they'll come back to the page they had active.

To be fair to the other pagination methods above, this one comes with a drawback too, although in my opinion it's a minor one. For this method to be handled entirely by HTML and CSS, all of the HTML content for all pages must be loaded on the first page load, which can slow down the page if the content of each page is exceptionally large. For most paginated listings, however, it isn't.

Advanced

Now that the base concepts have been outlined, I'd like to show you what I consider the most complete solution for applying this method to a responsive site, where we don't want to limit users to the core HTML/CSS experience if they have scripts enabled in their browser.

NOTE: This implementation uses jQuery for ease of understanding, and portability. jQuery is, however, a very large library and might not be the most efficient way of accomplishing this from a performance perspective.

First, let's look at some easy changes to the HTML:
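A sketch of the revised markup, with previous and next links (class names assumed) added inside each page; the pages are now in order, with the first page still listed last:

    <div class="cssPagination">
        <div class="pages">
            <div class="page" id="page2">
                <ul>
                    <li>Result 4</li>
                    <li>Result 5</li>
                    <li>Result 6</li>
                </ul>
                <a class="prev" href="#page1">Previous</a>
                <a class="next" href="#page3">Next</a>
            </div>
            <div class="page" id="page3">
                <ul>
                    <li>Result 7</li>
                    <li>Result 8</li>
                    <li>Result 9</li>
                </ul>
                <a class="prev" href="#page2">Previous</a>
            </div>
            <div class="page" id="page1">
                <ul>
                    <li>Result 1</li>
                    <li>Result 2</li>
                    <li>Result 3</li>
                </ul>
                <a class="next" href="#page2">Next</a>
            </div>
        </div>
        <ul class="pageNav">
            <li><a href="#page1">1</a></li>
            <li><a href="#page2">2</a></li>
            <li><a href="#page3">3</a></li>
        </ul>
    </div>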

Specifically, what we've done here is add previous and next buttons to each page. The reason these are added here, and not with the rest of the navigation links, is that without JavaScript enabled we can't properly style or control their behavior dynamically, so we need static but functional links that we'll only render in our mobile view. At smaller screen sizes (like smartphones), requiring the user to tap one of several small links crowded next to other small links can be frustrating; giving them the same pagination functionality but restricting them to previous and next navigation is a simple change that drastically improves the user experience. To support all of these changes, we'll need to update our CSS:
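Here's a sketch of what that stylesheet might look like; the js and paginationActive class names are the hooks the script will use, and the exact rules in the demo may differ slightly:

    /* Existing rules, now scoped so the script can opt out of them */
    .cssPagination:not(.js) .page { display: none; }
    .cssPagination:not(.js) .page:last-child { display: block; }
    .cssPagination:not(.js) .page:target { display: block; }
    .cssPagination:not(.js) .page:target ~ .page:last-child { display: none; }

    /* Hide the previous/next buttons by default */
    .cssPagination .prev,
    .cssPagination .next { display: none; }

    /* Hooks for the script: it toggles .paginationActive rather than touching styles */
    .cssPagination.js .page { display: none; }
    .cssPagination.js .page.paginationActive { display: block; }
    .cssPagination.js .pageNav a.paginationActive { font-weight: bold; }

    /* On small screens, swap the numbered links for previous/next buttons */
    @media screen and (max-width: 480px) {
        .cssPagination .pageNav { display: none; }
        .cssPagination .prev,
        .cssPagination .next { display: inline-block; }
    }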

You'll notice several new lines, so let's break them down.

  • The first change is to our existing rules. We've added :not(.js) to the .cssPagination selector. This lets us keep CSS control over the general layout of the elements, even if (when) we upgrade the functionality with JavaScript.
  • Next, we've hidden the next and previous buttons by default, so that the user doesn't see two rows of navigation controls.
  • The next 3 lines work specifically with the two new classes (js and paginationActive) to give our script the ability to manage pagination without having to get its hands dirty mucking about with styles.
  • Last is the media query. If you're familiar with responsive design and development, you already know what this does. For those who might not recognize it: "@media screen and (max-width: 480px)" tells the browser to apply these rules only when the device has a screen and the viewport is 480 pixels wide or less.

Lastly we’ve added a script section, which will take over pagination when scripts are enabled (and in our case when jQuery is present):
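A minimal sketch of such a script, assuming jQuery is already on the page:

    $(function () {
        var $pagination = $('.cssPagination');

        // Tell the stylesheet that the script is taking over pagination
        $pagination.addClass('js');

        // Show the first page and highlight its nav link by default
        $pagination.find('#page1').addClass('paginationActive');
        $pagination.find('.pageNav a[href="#page1"]').addClass('paginationActive');

        // One handler covers the numbered links and the previous/next buttons
        $pagination.on('click', '.pageNav a, .prev, .next', function (e) {
            e.preventDefault();
            var target = $(this).attr('href');

            $pagination.find('.page').removeClass('paginationActive');
            $pagination.find(target).addClass('paginationActive');

            $pagination.find('.pageNav a').removeClass('paginationActive')
                .filter('[href="' + target + '"]').addClass('paginationActive');
        });
    });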

To see this all together in action, click here. To see how this setup looks without JavaScript enabled, all you need to do is set addInjQuery at the top of the script file to false. The demo also demonstrates how the previous and next buttons respond to the available screen resolution both with and without the script enabled.

NOTE: To use this script as is, I recommend using the version shown here and not the one in the demo; the demo's copy is hard-coded to a specific version of jQuery, whereas this version is compatible with virtually any version of jQuery you might already have on your site.

The script that was added only runs when scripts are enabled and jQuery is available. This allows the content to be progressively enhanced depending on which features are or aren't enabled in your user's browser and device, ensuring that no matter what device or browser your visitors use, they get the best experience your site can offer in the most efficient manner.

To recap: this method of paginating a listing on a web page is responsive, not dependent on JavaScript, not dependent on requests to the server (either synchronous or asynchronous), 508 compliant, and in most cases incredibly efficient. It is also heavily customizable to fit any website design in any language.

That's all for this topic. To read more by some of our other incredibly talented staff, see below:


Parsing XML Feeds with JavaScript

By John Vine | October 28th, 2013 | Categories: Web Development

There can be any number of reasons to parse XML. Perhaps you want to set up a custom search of an RSS feed. Maybe you need to render a data report from your ERP system on a web page but can't get HTML output from your ERP provider. Whatever the reason, parsing XML with JavaScript is very easy.

To start you'll need three things. For simplicity, we'll include jQuery 1.5 or later. Next, download and include this XML-to-JSON converter: http://www.fyneworks.com/jquery/xml-to-json/. Finally, you'll need some XML. My examples will use the following XML feed:
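Here's a hypothetical version of that feed; the element names and values are just for illustration:

    <?xml version="1.0" encoding="UTF-8"?>
    <musicians>
        <musician>
            <name>Jane Doe</name>
            <instrument>Guitar</instrument>
        </musician>
        <musician>
            <name>John Smith</name>
            <instrument>Drums</instrument>
        </musician>
    </musicians>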

We'll say my XML is hosted at www.mydomain.com/musicians.xml. In order to search this file, you'll first pull it into scope with a jQuery get request like this:
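A sketch of that request, with the callback defined inline:

    $.get('http://www.mydomain.com/musicians.xml', function (xml) {
        // xml is the parsed XML document returned by the request
    });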

That passes the XML from the requested file as a parameter to the callback function, which can be defined either inline, as in the example above, or as a standalone function for a more modular approach. Inside the callback you'll need to convert the XML into a JSON object for easy manipulation. This is done with the following line of code:
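Assuming the fyneworks plugin is included, the conversion is a single call to its $.xml2json helper:

    var json = $.xml2json(xml);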

At this point the rest is very straightforward. The XML is now in a JSON object and can be accessed in the following ways:
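With the hypothetical feed above, the repeated musician elements come back as an array on the resulting object, so access is plain JavaScript property and array notation:

    // e.g. json.musician[0].name       -> "Jane Doe"
    //      json.musician[1].instrument -> "Drums"
    $.each(json.musician, function (i, musician) {
        console.log(musician.name + ' plays ' + musician.instrument);
    });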

Now you have access to all the data in the XML as a single object, making data retrieval very simple, even for complex XML data feeds like RSS.


3rd Party Plugins, A Cautionary Tale

By John Vine | August 20th, 2013 | Categories: Web Development

One of the many questions you'll have to answer when developing your website is whether to use custom-built, in-house solutions or easy-to-integrate 3rd party systems. It's certainly easy enough to say that the cost of most 3rd party solutions will be lower than custom-built software, but let's explore what you might be risking for that decrease in initial cost. When you entrust certain aspects of your site's functionality to a 3rd party system, you are suddenly vulnerable to a number of risks that you should seriously consider before making a decision.

If the solution is built the best way it can be for speed, then it will be hosted on your own servers. You'll install it and configure it, and you might even have an admin panel to maintain it. What you don't have, though, is the ability to easily change it. Any changes you want, whether for personal preference or because of errors in the software, will need to be made by your own development staff (unless the software offers a support package, which you'll undoubtedly have to pay extra for). Your internal developers may or may not be familiar with the code, or it might not be in a language they are fluent in, or it might have been compressed to the point that it's either a nightmare to edit or can't be edited at all. This means that if the software you purchased uses deprecated HTML, script, or CSS, it might not work with newer browsers, and updating the software you bought and became accustomed to might be more expensive than having it built from scratch by your in-house team.

What if your 3rd party solution is hosted externally, like many of the available Google services? That way the onus is on the 3rd party to maintain their services and ensure they are up to date with current standards. The drawback in this case is that your site's functionality is now dependent not only on your hosting servers and their maintenance, but also on that 3rd party's servers and maintenance. One mistake on their end could make it impossible to submit a contact form on your site, change how taxes are calculated, or even bring your whole checkout process to a halt. Even Google, which many think might be infallible, suffered a 5-minute blackout this past weekend in which 50%–70% of all requests received an error. This might sound like a small window, but it doesn't take 5 minutes for an error on your ecommerce platform to divert a lot of revenue to your competitors.

With larger solutions, like tax providers or shipping providers, those might be the only problems you'll face, but with smaller plugins there is a third potential chink in the armor. What if you've downloaded and installed a plugin and the original author decides to stop supporting it? This is becoming more and more of a problem, with amateur and hobbyist developers writing quick plugins to solve current problems and releasing them on the internet without support. They'll probably work at first (after all, if they didn't, you wouldn't have bought them), but as browsers update, standards change, and older practices become deprecated, you can bet that your unmanaged, unsupported plugin has a shelf life, and the more complex the plugin, the closer its expiration date.

With all the dangers of relying on 3rd party solutions, you might be wondering why anyone would ever choose to put their faith in something written by an outsider. The answer is simple: start-up cost. It's much cheaper to go with a 3rd party solution that's already being used by other sites on the web, and in most cases the plugin will be in better shape and have more functionality than something developed in house, as long as you do your research and ensure the solution has worked for others before you. Don't forget, though, that when you decide to rely on an external vendor for your own site's functionality, your site is now dependent on more than just your developers and your hosting company. It's dependent on a network of vendors that you'd be wise to keep track of.


Creating an Accordion List Without JavaScript

By John Vine | February 7th, 2013 | Categories: Web Development

Many sites implement accordions for everything from menus and navigation to product features to calendar event details. Accordion lists are among the most popular uses of JavaScript and can be seen in some form on almost every web application and most advanced web sites. With the recent explosion of mobile devices accessing the web, there has been a lot more value placed on responsive design. One facet of that has been developing sites without relying on JavaScript, which has been the most common way of generating rich media experiences for users. Some things simply can't be done without JavaScript, but others can be accomplished with a bit of innovation. Accordion lists are just one of many things commonly done in JavaScript that can work even in browsers with scripts disabled or not supported at all.

A typical HTML snippet for a list might look like this:
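For example (the ids, headings, and content here are just illustrative):

    <ul id="accordion">
        <li>
            <h3>Section One</h3>
            <div>Content for section one goes here.</div>
        </li>
        <li>
            <h3>Section Two</h3>
            <div>Content for section two goes here.</div>
        </li>
        <li>
            <h3>Section Three</h3>
            <div>Content for section three goes here.</div>
        </li>
    </ul>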

A script would then operate on the items to give them the desired behavior of expanding and contracting on click or hover. With a few simple modifications, that same behavior can be accomplished without relying on scripts being enabled in your customer's browser (the changed lines are noted in the sketch below):
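A sketch of those modifications (the ids and exact rules are illustrative):

    <ul id="accordion">
        <li>
            <!-- changed: the h3 is replaced by a label pointing at a new radio input -->
            <label for="section1">Section One</label>
            <input type="radio" name="accordion" id="section1" checked>
            <div>Content for section one goes here.</div>
        </li>
        <li>
            <label for="section2">Section Two</label>
            <input type="radio" name="accordion" id="section2">
            <div>Content for section two goes here.</div>
        </li>
        <li>
            <label for="section3">Section Three</label>
            <input type="radio" name="accordion" id="section3">
            <div>Content for section three goes here.</div>
        </li>
    </ul>

And the style rules that drive it:

    #accordion li input[type="radio"] ~ div { display: none; }
    #accordion li input[type="radio"]:checked ~ div { display: block; }
    #accordion li input[type="radio"] { display: none; }
    #accordion li label { cursor: pointer; }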

We've changed the h3 tags to labels and added for attributes pointing at radio inputs that have been added next to the content we want to toggle between displayed and hidden. The first two rules dictate which content is displayed based on the state of the radio button that precedes each content section. The third rule hides the actual input, since it's unlikely you want random input tags cramping your style (pun intended). The fourth rule simply provides feedback to your users on hover that the element is clickable. If you want to trigger the expansion on hover, the second rule can be changed to "#accordion li:hover div { display: block !important; }". Take special note of the !important flag on that style: it's needed because the first rule would otherwise override the hover rule due to its higher specificity.

Obviously one of the drawbacks of this method is the lack of animation, since prior to HTML5 and CSS3, animation was only practical with JavaScript. The good news is that once this HTML/CSS accordion is in place, you can also add scripts to your page that implement the accordion with script for users who do have it enabled, and render the styles we added for the scriptless version inert by simply removing the radio inputs from the DOM. Now your accordion can expand and contract both with and without script, though admittedly it will probably look better when scripts are enabled.