Assessment Report

Michael Barlow & Sam Sutherland-Dee
Estimated Grades
Section         Estimated Grade
HTML            A
CSS             A
JS              A
PNG             A
SVG             A
Server          A
Database        A
Dynamic Pages   A
Depth           32
HTML
  • We have learned a great deal about how to structure HTML correctly, such that it is easy to read and debug, yet complex enough to create interesting pages. It is important to make sure that every tag is necessary and named appropriately, to ensure readability both in the HTML file itself and the corresponding CSS. This helped us a great deal when debugging the formatting of our pages.
  • We decided to use Bootstrap 4 to allow us to build responsive designs. Bootstrap has pre-styled components and extensive documentation that enabled us to quickly build attractive pages.
  • Our pages scale and re-order to accommodate all screen dimensions, so they look as good on mobile as they do on desktop.
  • We are using mustache.js server-side templates; for more information, see the Dynamic Pages section.
  • Our site uses CDNs to retrieve our JavaScript libraries. This comes with the inherent risk that the CDN server could be compromised and these files corrupted (potentially maliciously) by an adversary. To defend against this, we include SRI hashes of each library, which browsers use to verify that the resources are as expected (see the sketch after this list).
  • All of our pages are served as XHTML (when possible) and have been manually checked for correctness using an online XHTML validator.
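As an illustration of how such an SRI hash can be produced (a minimal sketch, not our build tooling; the file name and CDN URL are placeholders):

    const crypto = require('crypto');
    const fs = require('fs');

    // Compute the SRI hash of a library file as served by the CDN.
    // 'anime.min.js' is a placeholder for whichever file is being pinned.
    const body = fs.readFileSync('anime.min.js');
    const hash = crypto.createHash('sha384').update(body).digest('base64');
    console.log('sha384-' + hash);

    // The result goes in the script tag, e.g.:
    // <script src="https://cdn.example.com/anime.min.js"
    //         integrity="sha384-..." crossorigin="anonymous"></script>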
CSS
  • As mentioned previously, we are using Bootstrap 4 as our front-end framework and use many of its built-in classes to style and lay out our site components.
  • We carefully override Bootstrap styling in our stylesheet where necessary to ensure a consistent look and feel.
  • We have learned about a range of CSS selectors and used them consistently. All of our classes and ids are named appropriately to make our CSS as readable as possible.
  • We have experimented with CSS media queries to change the number of columns on our search page based on the screen width.
  • Our careful use of CSS ensures our pages render as expected at all screen dimensions.
JS
  • We use the client-side library anime.js to facilitate the animations on our pages. These include the search tab animation, soft loading animations, the back-to-top animation and our logo animation. Making the animations fluid and professional was very time consuming, especially the search tab animation, but we are proud of the result.
  • We embed several YouTube videos on the learn page; embedding them directly, however, caused massive overhead on load, as the many additional files loaded for each video pushed our load time above 5 seconds. To fix this, we instead embed placeholder images and, only once an image is clicked, swap it for an iframe containing the embedded YouTube video using client-side JavaScript (see the sketch after this list). The video autoplays, so the user only has to click once. An example of this solution can be found in the PNG section (try clicking on the video placeholder image).
  • We have separated our JavaScript files so that only relevant code is loaded for each respective page.
  • In a previous version of the site, we used AJAX to update HTML on the client side; however, we found doing this on the server more flexible, so we migrated. For more information, see the Dynamic Pages section.
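A minimal sketch of the placeholder-to-iframe swap described above (the class and data attribute names are illustrative, not necessarily those used on our learn page):

    // Swap a clicked placeholder image for an autoplaying YouTube iframe,
    // so the heavy embed only loads on demand.
    document.querySelectorAll('.video-placeholder').forEach(function (img) {
      img.addEventListener('click', function () {
        const iframe = document.createElement('iframe');
        iframe.src = 'https://www.youtube.com/embed/' +
                     img.dataset.videoId + '?autoplay=1';
        iframe.width = img.width;
        iframe.height = img.height;
        iframe.allow = 'autoplay; encrypted-media';
        img.replaceWith(iframe); // only now do the extra video files load
      });
    });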
PNG
  • We constructed several PNG images using a variety of tools in GIMP.
  • To construct the placeholder images for the YouTube videos on the learn page, we first downloaded the thumbnails of the relevant videos. Each was imported into GIMP, and a YouTube play button icon with a transparent background was added as a new layer.
  • Creating the recipe information icons in GIMP was a little trickier: we needed to overlay complex selections to get the shapes we required, some filled and some transparent.
  • The favicon was constructed with a filled rounded rectangle selection over a transparent background, with some text on a new layer. This icon matches the colour of our navigation bar.
  • To further our GIMP skills, we experimented with other tools and techniques, but did not produce anything suitable for display on our website.
[Images: YouTube video placeholder (George's Oysters); recipe information icons (servings, clock, book); favicon (standard and small)]
SVG
  • We constructed the logo, seen below, in Inkscape. We first added the text and then placed lines by entering the SVG paths manually.
  • To animate the logo using our JavaScript animation library, we needed the logo to be a single path, which required converting the text to an SVG path in Inkscape. When integrating the SVG file into our site, we found unnecessary Inkscape metadata in the file. In addition, the positioning in the SVG file was relative to the corner of the original Inkscape canvas, not (0, 0), which left our logo offset from its intended position on our site. To resolve this, we first tried using SVG transformations to reposition the logo, but this was not a clean solution. Instead, we imported the SVG path into GIMP, then modified and re-exported it. This not only removed the unnecessary metadata but also relocated the coordinate system origin.
  • To animate this path, we tween the SVG stroke-dashoffset using a quadratic easing function, which gives the effect of the lines filling in (see the sketch after this list). This animation is displayed at the top of this page.
  • We also created the database SVG diagrams seen in the Database section using the online tool draw.io.
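A minimal sketch of the line-filling animation with anime.js (the selector and duration are illustrative):

    // Tween stroke-dashoffset from the full path length down to 0,
    // which progressively draws the logo's lines in.
    anime({
      targets: '#logo path',
      strokeDashoffset: [anime.setDashoffset, 0], // full length -> 0
      easing: 'easeInOutQuad',                    // quadratic easing
      duration: 2000
    });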
[Images: logo; white logo]
Server
  • We did considerable work on a server without Express, but found implementing features such as argument passing cumbersome, so we eventually switched to Express.
  • We are using a range of node modules to enhance the server, including express, mustache and node-schedule.
  • We are delivering pages (and resources) both statically and with server-side templates.
  • We have implemented HTML content negotiation for older browsers, meaning our pages are delivered as XHTML only to browsers that declare support for it (see the first sketch after this list).
  • We set up local HTTPS certificates, but left them out of the submission as our self-signed certificate would require whitelisting in the browser.
  • We wrote a JavaScript file to auto-test how our server handles erroneous URLs. We ran this script (located at ./site/test.js relative to the submission directory) every time we made structural changes to the server (see the second sketch after this list).
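A minimal sketch of the content negotiation idea in Express (the port, route and markup are placeholders; our actual middleware may differ):

    const express = require('express');
    const app = express();

    // Send application/xhtml+xml only when the browser's Accept header
    // declares support for it; older browsers get text/html instead.
    app.use(function (req, res, next) {
      const best = req.accepts(['application/xhtml+xml', 'text/html']);
      res.type(best === 'application/xhtml+xml'
        ? 'application/xhtml+xml'
        : 'text/html');
      next();
    });

    app.get('/', (req, res) => res.send('<html>...</html>'));
    app.listen(8080);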
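And a minimal sketch in the spirit of ./site/test.js (the URLs, port and expected codes are illustrative, not our actual test suite):

    // Fire deliberately bad URLs at the local server and report each
    // status code; the server should answer cleanly rather than crash.
    const http = require('http');

    const badUrls = ['/does-not-exist', '/../etc/passwd', '/search?difficulty=%00'];

    badUrls.forEach(function (path) {
      http.get({ host: 'localhost', port: 8080, path: path }, function (res) {
        console.log(path, '->', res.statusCode); // expect 404 / 400
      }).on('error', console.error);
    });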
Database
  • We have learned a huge amount about databases. Ours is a normalised SQLite database, with a structure illustrated by the entity-relationship diagram (ERD) below.
  • To facilitate the many-to-many relationships in our data, our database features multiple junction tables, which are naturally joined for lookup queries (see the first sketch after this list).
  • While our database is mostly static, we have implemented a schedule for updating the featured recipes table: using the node-schedule module, it is refreshed with 5 randomly selected recipes at 23:59:59 every day (see the second sketch after this list).
  • The database is automatically generated and populated using data scraped from BBC Good Food.
  • For more detail about both our data and our database, see the Depth section.
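A minimal sketch of a junction-table lookup (table and column names are illustrative, not our exact schema):

    const sqlite3 = require('sqlite3');
    const db = new sqlite3.Database('recipes.db');

    // recipe_ingredient is the junction table; the shared recipe_id and
    // ingredient_id columns are what drive the natural joins.
    db.all(
      `SELECT recipe.title, ingredient.name
         FROM recipe
         NATURAL JOIN recipe_ingredient
         NATURAL JOIN ingredient
        WHERE ingredient.name = ?`,
      ['cheese'],
      function (err, rows) {
        if (err) throw err;
        console.log(rows);
      }
    );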
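And a minimal sketch of the nightly refresh with node-schedule (the SQL and table names are illustrative):

    const schedule = require('node-schedule');
    const sqlite3 = require('sqlite3');
    const db = new sqlite3.Database('recipes.db');

    // Six-field cron spec: second minute hour day month weekday,
    // so this fires at 23:59:59 every day.
    schedule.scheduleJob('59 59 23 * * *', function () {
      db.serialize(function () {
        db.run(`DELETE FROM featured`);
        db.run(`INSERT INTO featured
                SELECT * FROM recipe ORDER BY RANDOM() LIMIT 5`);
      });
    });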
[Diagrams: high-level summary of our data; implementation-level database ERD]
Dynamic Pages
  • We are using mustache templates which are populated on the server side.
  • In previous versions of our website, we used AJAX to dynamically update pages on the client side (primarily for search results). This worked well; however, the browser's forward/backward navigation cleared previous search results. To fix this, we switched to server-side templates, using them for every page to maintain consistency.
  • We have centralised the navigation bar and footer HTML to avoid repeating it on every page. Previously we loaded this HTML in using client-side JS, but we migrated to server-side loading using mustache templates to minimise visual changes to the page on load.
  • We are using more complex mustache looping commands to populate our search results page with a variable number of recipes. The templates also check for the presence of optional data and render the page accordingly (see the sketch after this list).
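A minimal sketch of these mustache features — loops, optional sections and the shared nav/footer partials (the template, partials and field names are illustrative, not our exact code):

    const Mustache = require('mustache');

    const template = `
    {{> nav}}
    {{#recipes}}
      <h2>{{title}}</h2>
      {{#image}}<img src="{{image}}" alt="{{title}}"/>{{/image}}
    {{/recipes}}
    {{^recipes}}<p>No recipes found.</p>{{/recipes}}
    {{> footer}}`;

    const partials = {
      nav: '<nav>...</nav>',         // shared navigation bar
      footer: '<footer>...</footer>' // shared footer
    };

    const html = Mustache.render(template, {
      recipes: [
        { title: 'Cheese toastie', image: '/img/toastie.png' },
        { title: 'Mystery soup' } // no image: the {{#image}} section is skipped
      ]
    }, partials);
    console.log(html);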
Depth
  • Our initial aim was to create a website where you can quickly search for recipes to cook. Unlike the text-only search bars available on most websites, we wanted to extend our search functionality to include other influencing data, such as how many servings a recipe has, how difficult it is to make or what ingredients it contains. To facilitate this, we needed a large database and a scoring search query. We could not find any large, free recipe APIs or databases, so we had to collate our own.
  • We are very proud of our database, which is extensive and was scraped from BBC Good Food. The scraping was completed using a Python script, located at ./bbc-recipe-scraper/scrape.py relative to the submission directory. It is worth noting that this script was not designed to be reliable or portable; it was built to scrape data a single time from a single machine.
    • The script first pulls the sitemap.xml file from BBC Good Food.
    • The script then parses and traverses this file using the ElementTree XML API. Relevant recipe pages are found using a regular expression, and saved for later use.
    • The script then traverses this list of relevant recipe page URLs, navigating to each and downloading a local copy. A spoofed User-Agent request header was required to access the URLs from code.
    • With the recipe pages all downloaded, the script parses and traverses the HTML using BeautifulSoup4. Data is pulled and sanitised from the page meta and content tags, and the recipe image is downloaded from the corresponding image URL. This data is then saved in a Python dictionary.
    • Once the dictionary has been populated, the script then exports it as JSON and terminates.
    • With the resulting JSON file (./site/all_recipes.json relative to the submission), a database can then be constructed. The SQLite database is built (or updated) using a JavaScript script, located at ./site/create_recipe_db.js, which requires only the recipe JSON as input (see the first sketch after this list). This scraping was done for educational purposes (and fun!); we are aware that this data does not belong to us, and as a result we will not publish this website.
  • We are also particularly proud of our search page. The search functionality is entirely stateless: all the relevant information is encoded in the GET request URL, including the pagination and results-per-page buttons, which require unique rendering depending on their value. Search results are retrieved using a single SQL query with a scoring metric we implemented manually, specifically for our database (see the second sketch after this list). For example, given the query 'cheese', points are assigned based on a string match with the recipe title (with extra points if 'cheese' appears at the start), as well as string matches with the ingredients and method. Furthermore, difficulty and servings data can be added to the query to further refine results. Query validation and sanitisation are performed, and prepared statements are used. We also wrote a suite of auto-tests to boost our confidence in the integrity of our implementation.
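A minimal sketch of the database-building step described above (the real ./site/create_recipe_db.js is more involved; the field and table names here are illustrative):

    const fs = require('fs');
    const sqlite3 = require('sqlite3');

    // Load the scraped recipe data and insert it row by row.
    const recipes = JSON.parse(fs.readFileSync('./site/all_recipes.json', 'utf8'));
    const db = new sqlite3.Database('recipes.db');

    db.serialize(function () {
      db.run(`CREATE TABLE IF NOT EXISTS recipe
              (recipe_id INTEGER PRIMARY KEY, title TEXT,
               difficulty TEXT, servings INTEGER)`);
      const stmt = db.prepare(
        `INSERT INTO recipe (title, difficulty, servings) VALUES (?, ?, ?)`);
      for (const r of Object.values(recipes)) {
        stmt.run(r.title, r.difficulty, r.servings);
      }
      stmt.finalize();
    });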
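And a minimal sketch of the scored, prepared-statement search idea (the weights, column names and parameters are illustrative, not our production query):

    const sqlite3 = require('sqlite3');
    const db = new sqlite3.Database('recipes.db');

    // Points for a title match (more if the term starts the title),
    // plus points for ingredient and method matches.
    const sql = `
      SELECT title,
             (CASE WHEN title LIKE $term || '%' THEN 4 ELSE 0 END)
           + (CASE WHEN title LIKE '%' || $term || '%' THEN 2 ELSE 0 END)
           + (CASE WHEN ingredients LIKE '%' || $term || '%' THEN 1 ELSE 0 END)
           + (CASE WHEN method LIKE '%' || $term || '%' THEN 1 ELSE 0 END) AS score
        FROM recipe
       ORDER BY score DESC
       LIMIT $perPage OFFSET $offset`;

    db.all(sql, { $term: 'cheese', $perPage: 10, $offset: 0 }, function (err, rows) {
      if (err) throw err;
      console.log(rows);
    });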