The npm ls command lists the dependency tree of a specific package in your Node.js project:
npm ls packagename
Example output:
npm ls third-package-name
myproject@0.1.0 /Users/username/path/to/myproject
├─┬ package-name@3.12.0
│ └─┬ another-package-name@4.12.0
│ └── third-package-name@9.17.0
├─┬ ...
When you perform security scans and audits with Snyk, Fossa, or a similar tool, you may discover that, for example, third-package-name@9.17.0
has a vulnerability. Updating the library or package that uses this transitive package is the best fix, but if that is not possible you can pin it like this.
This package modifies package-lock.json
to force the installation of a specific version of a transitive dependency (a dependency of a dependency), similar to yarn’s selective dependency resolutions, but without having to migrate to yarn.
npm i -D force-resolutions
"resolutions": {
"third-package-name": "^12.0.0"
},
Here 12.0.0 is an example of an updated version that doesn’t have the vulnerability.
Add a preinstall script to package.json:
"preinstall": "npx force-resolutions",
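Putting the pieces together, the relevant parts of package.json end up looking like this (the package name and version range are the placeholders from above; force-resolutions itself lives in devDependencies after the install step):

```json
{
  "scripts": {
    "preinstall": "npx force-resolutions"
  },
  "resolutions": {
    "third-package-name": "^12.0.0"
  }
}
```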
Install again
npm i
Now, when you check the dependency tree again, you can see it has been fixed:
npm ls third-package-name
myproject@0.1.0 /Users/username/path/to/myproject
├─┬ package-name@3.12.0
│ └─┬ another-package-name@4.12.0
│ └── third-package-name@12.0.0
├─┬ ...
You can create a symlink like this.
In the package:
npm link
In the project that uses the package:
npm link my-package
Now the packages are linked.
To unlink:
npm unlink my-package
Install by just referencing the local path inside package.json
{
"name": "my-project",
"dependencies": {
"@myuser/my-package": "file:../lib"
}
}
And then just install
npm i
If you want, you can pack the local package and use the tarball locally. This way you are using the same artifact that gets uploaded to npm.
npm pack
This command generates the file my-package-X.X.X.tgz, where X.X.X is the semver version.
You can now reference this tarball directly in your main project’s package.json:
"dependencies": {
"@myuser/my-package": "file:../lib/my-package-X.X.X.tgz"
}
Just one example of importing a file from the package:
import "@myuser/my-package/dist/index.css"
Maybe you are developing a package and building a documentation website for it, or another project that uses lib:
.
├── docs
├── lib
Then you can have the lib package installed automatically when you install dependencies in the docs project. Using the built-in preinstall npm script, it will be installed first when you run npm i in the docs project.
"scripts": {
"preinstall": "npm install ../lib/"
}
This is the simplest option. When you do this, images won’t be processed in any way by Gatsby. Place them in static/images and reference them like this:
![A blue bicycle](/static/images/bicycle.jpg)
This is easy to miss in the docs. If you place images in src/images instead, they will be processed by sharp and appear as if you had placed them in a gatsby-plugin-image component, so you benefit from all the goodies: responsive images, placeholders, modern image formats and more. Install the needed plugins:
npm install gatsby-plugin-image gatsby-plugin-sharp gatsby-source-filesystem gatsby-transformer-sharp
Then add them to gatsby-config.js (or gatsby-config.mjs if you use Gatsby 5+ and ESM):
plugins: [
"gatsby-plugin-image",
{
resolve: `gatsby-plugin-mdx`,
options: {
gatsbyRemarkPlugins: [
{
resolve: `gatsby-remark-images`,
options: {
maxWidth: 1200,
},
},
],
},
},
"gatsby-plugin-sharp",
"gatsby-transformer-sharp",
{
resolve: "gatsby-source-filesystem",
options: {
name: "images",
path: "./src/images/",
},
__key: "images",
}
]
![A blue bicycle](../images/bicycle.jpg)
Read more about images in MDX
The two image components <GatsbyImage />
and <StaticImage />
are meant to be used in JSX, not in MDX. You can, however, create a separate Image component if you want:
import React from "react";
import { graphql, useStaticQuery } from "gatsby";
import { GatsbyImage, getImage } from "gatsby-plugin-image";
export const Image = ({ src, alt }) => {
const data = useStaticQuery(graphql`
query {
allFile {
nodes {
relativePath
childImageSharp {
gatsbyImageData(
placeholder: BLURRED
formats: [AUTO, WEBP, AVIF]
)
}
}
}
}
`);
const imageNode = data.allFile.nodes.find(
(node) => node.relativePath === src
);
if (!imageNode || !imageNode.childImageSharp) {
return <div>Image not found</div>;
}
const image = getImage(imageNode.childImageSharp);
return (
<div>
<GatsbyImage image={image} alt={alt} />
</div>
);
};
And then use like this:
import { Image } from "components/Image"
<Image src="cycle.jpg" alt="An alt text"/>
The presence of a jsconfig.json file in a directory indicates that the directory is the root of a JavaScript project. The jsconfig.json file specifies the root files and the options for the features provided by the JavaScript language service.
Tip: jsconfig.json is a descendant of tsconfig.json, the configuration file for TypeScript. jsconfig.json is tsconfig.json with the “allowJs” attribute set to true.
VS Code uses a jsconfig.json file to aid its JavaScript language service and significantly improve your development experience.
Create a jsconfig.json file in the root of your project:
{
  "compilerOptions": {
    "target": "ES6",
    "module": "ES6",
    "jsx": "react",
    "checkJs": true,
    "allowJs": true,
    "allowSyntheticDefaultImports": true,
    "experimentalDecorators": true,
    "resolveJsonModule": true,
    "moduleResolution": "node",
    "baseUrl": "src",
    "paths": {
      "*": ["*", "src/*"],
      "components/*": ["./components/*"],
      "utils/*": ["./utils/*"]
    }
  },
  "typeAcquisition": {
    "enable": true
  },
  "compileOnSave": true,
  "include": ["src/**/*"]
}
If you have set up webpack aliases in gatsby-node.js to make your imports nicer, e.g. import { Stuff } from "components/stuff", you can mirror them in paths: {}.
If you import JSON data you can set "resolveJsonModule": true. With that done, you should be able to import a JSON file as a JavaScript module.
To make your configuration smooth and play nicely with VS Code, I recommend these settings:
{
...
"[javascript]": {
"editor.formatOnSave": true,
"editor.formatOnPaste": true
},
"js/ts.implicitProjectConfig.checkJs": true,
"javascript.validate.enable": false,
"[scss]": {
"editor.formatOnSave": true,
"editor.formatOnPaste": true
},
"editor.codeActionsOnSave": {
"source.organizeImports": true,
"source.addMissingImports": true
},
"emmet.includeLanguages": { "javascript": "javascriptreact" }
...
}
Notes:
Setting javascript.validate.enable to false turns off all error reporting for JS files, but it doesn’t change how TypeScript operates. You need to set js/ts.implicitProjectConfig.checkJs
to specify the settings for an implicit jsconfig project. This is the equivalent of having a jsconfig.json
with the contents { "compilerOptions": { "checkJs": true } }
Emmet is always handy to have. You can include "emmet.includeLanguages": { "javascript": "javascriptreact" }
to enable Emmet support for JSX. This makes typing HTML in React (.js) so much easier, and faster.
Now you can get full auto-complete and file browsing support in your IDE!
If you’re looking for a fast and highly extensible launcher, Raycast is an excellent choice. As someone who used Alfred for a long time, I recently made the switch and have been impressed with the features and functionality of Raycast. One of the things I often do when listening to music is browse Discogs to explore new albums and records. With this script command, I can open Discogs and automatically search for the current album I’m listening to, which I find very handy. To fetch the music I’m currently listening to, I use LastFM to scrobble tracks from TIDAL. Many music services and apps support LastFM natively, or as plugins. Even though some apps support getting the currently playing track/artist/album via AppleScript, it’s not possible for all apps. That’s why I like the option of using LastFM, since I already scrobble all my music.
Raycast executes your local scripts and enables you to perform frequent and useful commands without having to open a terminal window.
Getting started with Raycast Scripts
Under Raycast Preferences you can go to “Extensions → Scripts” and find your scripts. Manage them, set aliases, record a hotkey and more.
When you launch Raycast, type “script” and key down to “Create Script Command”. Select template, mode, title, description and more.
All scripts have parameters that instruct Raycast on how to process and output your request. You can read more about them here. In the example, we have set the @raycast.mode
parameter to silent, which means that the script will run instantly and silently, allowing you to access the newly opened page when you are ready.
#!/usr/bin/env bash
# Required parameters:
# @raycast.schemaVersion 1
# @raycast.title Discogs Now Playing
# @raycast.mode silent
#
# Optional parameters:
# @raycast.icon icon.png
# @raycast.iconDark icon.png
#
# Documentation:
# @raycast.description Open Discogs and explore the currently playing album
# @raycast.author Urban Sanden
# @raycast.authorURL https://urre.me
# Read secrets from the .env file
source "./.env"
# Specify LastFM API and specify JSON as the format
URL="http://ws.audioscrobbler.com/2.0/?method=user.getrecenttracks&user=${DISCOGS_USER}&api_key=${DISCOGS_API_KEY}&format=json"
# Query the LastFM API with cURL
result=`curl -s ${URL}`
# Get currently playing track using jq, Artist and Album name, flatten and remove whitespace
nowplaying=`echo ${result} | jq -r '[.recenttracks.track[0].artist["#text"], .recenttracks.track[0].album["#text"]] | flatten[]' | xargs`
# Now open Discogs in the browser
open "https://discogs.com/search?q=${nowplaying}"
I’m using jq
. It’s a lightweight and flexible command-line JSON processor. You can install it with brew install jq
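For readers who’d rather see the extraction in JavaScript, the jq expression maps to plain property access. The response object below is a mock containing only the fields the script reads; the real user.getrecenttracks payload has many more:

```javascript
// Mocked Last.fm user.getrecenttracks response — the shape is an
// assumption based on the fields the jq expression reads
const result = {
  recenttracks: {
    track: [
      {
        artist: { "#text": "Christian McBride Big Band" },
        album: { "#text": "For Jimmy, Wes and Oliver" },
      },
    ],
  },
};

// Same idea as the jq pipeline above: pick the artist and album of
// the most recent track and join them into one search string
const track = result.recenttracks.track[0];
const nowplaying = [track.artist["#text"], track.album["#text"]].join(" ");
console.log(nowplaying); // Christian McBride Big Band For Jimmy, Wes and Oliver
```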
Create an .env file with your credentials
DISCOGS_API_KEY="XXX"
DISCOGS_USER="XXX"
Play some music, open Raycast and start typing Disc…
Cloudinary is awesome. It’s cheap, fast, stores a lot of images even in the free tier and doesn’t take up data space in your apps and projects. Your images load fast everywhere with the CDN, and you can also create transformations programmatically and on the fly, without the need for graphic designers and fancy editing tools. Change formats, resize, crop and more.
Wouldn’t it be nice to upload images from your computer (e.g. blog images or screenshots) directly to the Cloudinary Media Library, and then get back a CDN link to paste anywhere?
Prerequisite: To use the Cloudinary CLI, you need Python 3.6 or later.
To use it in scripts or in Automator, first install the Cloudinary CLI:
pip3 install cloudinary-cli
To make all your cld
commands point to your Cloudinary account, set up your CLOUDINARY_URL environment variable.
export CLOUDINARY_URL=cloudinary://XXX:XXX@yourname
Check your configuration by running:
cld config
for f in "$@"
do
/opt/homebrew/bin/cld uploader upload "$f" | /opt/homebrew/bin/jq -r '.secure_url' | /usr/bin/pbcopy
done
I’ve added a pipe command here to copy the URL of the uploaded image to your clipboard. You’ll need jq to parse the JSON and extract the HTTPS link.
This screenshot is in Swedish, but you get the idea.
But what if you need to create a little collage or moodboard from the images? There are a number of services that can download Instagram posts, but I wanted to use something else: just some JavaScript in the browser developer tools, cURL to download the images, and ImageMagick to stitch together a collage. Let’s go!
Note: this is meant to be used for personal use only.
The things you need for this are ImageMagick, a web browser of your choice and a terminal.
ImageMagick is free software that you can use to create, edit, compose, or convert digital images. It runs everywhere, on Linux, Windows, macOS, iOS, Android OS, and more. It is distributed under a derived Apache 2.0 license.
It’s easy to install. On macOS you can install ImageMagick using Homebrew:
brew install imagemagick
Log in to Instagram on the web and click your profile image → Saved. The URL looks like this:
https://www.instagram.com/USERNAME/saved/example-name/XXXXXX
The devtools live inside your browser in a subwindow. To open them in Chrome, press Ctrl+Shift+J (on Windows) or Cmd+Option+J (on Mac).
Click on the Console tab and paste this code:
imgs = document.querySelectorAll('article img')
downloadCommands = ''
imgs.forEach((item, i) => {
downloadCommands += `curl ${item.currentSrc} --output ./images/${i + 1}.jpg\n`
})
// Copy the list of download commands to your clipboard
copy(downloadCommands)
Create a folder called “images” (mkdir images), then paste the copied commands into your terminal, and all the images from the Saved collection will download into it, relative to the current path. If you are on your desktop the path will be ~/Desktop/images.
Create color palettes from the images. Use this command to extract unique colors from the images. I just used one dominant color from each image, but you can customize it however you want.
convert ./images/*.jpg -colors 1 -unique-colors -scale 20000% -resize 1280x1280! ./images/color-scheme.jpg
This creates a color scheme collage from all the color palettes. Grab all the color palette images and add them to a single collage.
convert ./images/color-scheme*.jpg -gravity center -background none -extent 512x512 miff:- | montage - +repage -background white -tile 4x3 -border 80 -bordercolor white -geometry +5+5 collage-color-scheme.jpg
This command creates the moodboard, with all the images and the color palettes. I used a grid with 4 columns and 3 rows.
convert ./images/*.jpg -gravity center -background none -extent 512x512 miff:- | montage - +repage -gravity center -background white -tile 4x3 -border 80 -bordercolor white -geometry +3+4 collage.jpg
If you want a PDF that is more printer friendly, you can specify the page size as A4 or Letter.
convert collage*.jpg -background white -compress JPEG -quality 65 -page a4 moodboard.pdf
A simple example of how to create a little web-based responsive moodboard using CSS Grid.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Moodboard</title>
<style>
.grid {
--auto-grid-min-size: 200px;
--gap: 4vmin;
display: grid;
grid-template-columns: repeat(auto-fill, minmax(var(--auto-grid-min-size), 1fr));
grid-gap: var(--gap);
margin: var(--gap);
}
</style>
</head>
<body>
<div class="grid">
<img loading="lazy" src="images/1.jpg" alt="Image description" width="200" height="200">
<img loading="lazy" src="images/2.jpg" alt="Image description" width="200" height="200">
<img loading="lazy" src="images/3.jpg" alt="Image description" width="200" height="200">
<img loading="lazy" src="images/4.jpg" alt="Image description" width="200" height="200">
<img loading="lazy" src="images/5.jpg" alt="Image description" width="200" height="200">
<img loading="lazy" src="images/6.jpg" alt="Image description" width="200" height="200">
<img loading="lazy" src="images/7.jpg" alt="Image description" width="200" height="200">
<img loading="lazy" src="images/8.jpg" alt="Image description" width="200" height="200">
<img loading="lazy" src="images/9.jpg" alt="Image description" width="200" height="200">
<img loading="lazy" src="images/10.jpg" alt="Image description" width="200" height="200">
</div>
</body>
</html>
Note: I’ve been using Gatsby, but this example can be applied in any React site.
Building a good UI is hard, and doing it well involves technical, copywriting, UX and design skills. This won’t be a long post on the why and what of A/B testing; there is a lot to read on this topic, such as how to run statistically significant experiments. However, it’s very important to research and think about why you want to test something.
You might have Google Analytics tracking. You can see the number of visits and drop-offs, and maybe also track conversions. But what do you do when you want to try variants of the UI? A/B testing to the rescue! You can test things individually in a variant: colors, UX copy, position, layout, call-to-action messages, buttons and more.
There are a number of different types of A/B testing techniques you can implement. In this example I work with a Gatsby site. It’s a static site, and I can choose from:
Other great tools are Splitbee and Optimizely.
I’ll be working with traditional A/B testing in this example. I wanted to use libraries with a small footprint, so these were good options.
npm install react-ab-test mixpanel-browser
import {
Experiment,
Variant,
emitter,
experimentDebugger,
} from "@marvelapp/react-ab-test"
This is just some internal utils I have. You will have to replace mixPanelProjectID with your own ID.
import mixpanel from "mixpanel-browser"
import { logGAEvent, mixPanelProjectID } from "utils"
mixpanel.init(mixPanelProjectID)
experimentDebugger.enable()
Debugging tool. Attaches a fixed-position panel to the bottom of the <body>
element that displays mounted experiments and enables the user to change active variants in real-time.
This panel is hidden on production builds.
emitter.defineVariants("navigationCTAExperiment", [
"white",
"magenta",
"primary",
])
Wrap components in <Variant />
inside <Experiment />
. A variant is chosen randomly and saved to local storage.
<Experiment name="navigationCTAExperiment">
<Variant name="white">
<button className="button-primary" onClick={() => handleClick()}>
Start Free Trial
</button>
</Variant>
<Variant name="magenta">
<button className="button-brand-color" onClick={() => handleClick()}>
Start Free Trial
</button>
</Variant>
</Experiment>
emitter.emitWin("navigationCTAExperiment")
Call this when the experiment’s goal is reached, for example in the button’s click handler. The listener below runs whenever a win is emitted.
emitter.addWinListener(function(experimentName, variantName) {
console.log(
`Variant ${variantName} of experiment ${experimentName} was clicked`
)
/* Track in Mixpanel */
mixpanel.track(experimentName + " " + variantName, {
name: experimentName,
variant: variantName,
})
/* Do as many things you wish here. */
/* As an example using a little helper to send event to Google Analytics also. */
logGAEvent(
`Experiment ${experimentName}`,
"click",
`Variant ${variantName} was clicked`
)
})
And here is the localStorage item saved.
We recently had the need to serve a JSON file from the website, to be consumed by another project. Very simple, but it needed to be possible to create the content in our CMS, Contentful, and have the file automatically served from the website once the site has been built.
Let’s get started!
Not covered in this post, but you’ll need a standard content type with the relevant fields.
In gatsby-node.js
we add the code to fetch the data from Contentful, and then create the JSON file.
exports.onPostBuild = async ({ graphql, reporter }) => {
The onPostBuild
Node API runs after the build has been completed. It is the last extension point, called after all other parts of the build process are complete, so this is what we want to use.
To get the data we use a GraphQL query:
await graphql(`
{
notifications: allNotifications {
edges {
node {
id
title
description
link
}
}
}
}
`).then(result => {
const notifications = result.data.notifications.edges.map(
({ node }) => node
)
const data = []
notifications.forEach(notification => {
const notificationData = {
id: notification.id,
title: notification.title,
description: notification.description,
link: notification.link
}
data.push(notificationData)
reporter.info(
`Creating a notification: ${notification.title} in notifications.json`
)
})
})
...
Code in the file gatsby-node.js
is run once in the process of building your site. You can use its APIs to create pages dynamically, add data into GraphQL, or respond to events during the build lifecycle.
Read more about Gatsby Node APIs
fs.writeFileSync(`public/notifications.json`, JSON.stringify(data))
...
Note: we need Node’s fs
module in gatsby-node.js
to create the JSON file, so add const fs = require("fs") at the top of the file.
Then we save the file in the public
folder, which is the folder Gatsby serves the built site from.
The JSON file will look like this:
[
  {
    "id": "ce36f966-1588-5b30-9cb3-58ed43bca553",
    "title": "Lorem ipsum dolor sit",
    "description": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus quis eros elit.",
    "link": "https://example.com/cool-page/"
  },
  {
    "id": "ee45ca7a-3216-5fe5-853b-1d0918dad0b4",
    "title": "Lorem ipsum dolor sit",
    "description": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus quis eros elit.",
    "link": "https://example.com/cool-page/"
  }
]
We have an automated process with webhooks that are triggered when we publish or unpublish content in our Contentful space. The webhook triggers an automatic Pull Request in our CI that builds and publishes the site.
Simple: the file will be served from example.com/notifications.json, so just start making requests.
That’s all there is to it!
Open Graph meta tags allow you to control what content shows up when a webpage is shared across major social networks such as Facebook, Twitter, and LinkedIn. Aside from social media, hundreds of other content sharing tools (e.g., messenger tools such as Slack) use these tags. You can control how the title, site name, image, and descriptions are displayed. Think of it as a short intro to your content. It needs to be captivating enough for a viewer to like it or click into it.
You can put a <meta>
tag in the <head>
of a webpage to define this data. It looks like this:
<head>
<!-- Facebook Open Graph Image -->
<meta property="og:image" content="example.jpg"/>
<meta property="og:image:height" content="600"/>
<meta property="og:image:width" content="1200"/>
<!-- Twitter Card Image -->
<meta name="twitter:image" content="example.jpg"/>
<meta name="twitter:card" content="summary_large_image"/>
<meta name="twitter:image:alt" content="Example"/>
</head>
Note: You must use JPG or PNG for Open Graph images. SVG doesn’t work.
I run a small music blog called Jazztips. All content is written in Markdown and then compiled to a static site and deployed to Netlify. The posts are then published on Twitter and Facebook automatically via IFTTT.
This is what the default sharing card looks like, with the record cover as the Open Graph image. It definitely works, but let’s make it better.
Creating unique images for every post takes a long time and a lot of manual work, and different posts would require different images and different text. Otherwise they wouldn’t stand out very much when shared.
The HTML <canvas>
element can be used to draw graphics on a web page. With a Node.js script we can read the frontmatter (artist, title etc.) and then generate an image. The example below is a bit simplified.
The Markdown frontmatter looks like this:
title: For Jimmy, Wes and Oliver
artist: Christian McBride Big Band
image: https://res.cloudinary.com/urre/image/upload/w_600,h_600/v1606915632/screenshots/fghrcq1z1j4pk3sgyp2l.png
Below is a shortened version of the script ogimage.js:
const fs = require('fs')
const fm = require('front-matter')
const { createCanvas, loadImage, registerFont } = require('canvas')
const fileFrontmatter = fs.readFileSync(`../_posts/my-example-post.md`, 'utf8')
const fileData = fm(fileFrontmatter)
The front-matter package is used here to extract all the YAML data.
// Now load image into the canvas and add text
loadImage(fileData.attributes.image).then((image) => {
// Just some settings
const width = 1200
const height = 630
let fontSize = 64
let lineHeight = fontSize * 1.3975
let textArtistY = 120
let textTitleY = textArtistY + 220
// Init the HTML Canvas
const canvas = createCanvas(width, height)
const context = canvas.getContext('2d')
// Fill with a light green background color
context.fillStyle = '#bdfbd5'
context.fillRect(0, 0, canvas.width, canvas.height)
// Add the image to the Canvas
context.drawImage(image, 40, 50, 600, 600)
...
Let’s do a test to see what the output is.
const buffer = canvas.toBuffer('image/jpeg')
fs.writeFileSync('./temp.jpg', buffer)
If we open up this image we can see the result.
Since I use Spectral (designed by Production Type) I’d like to use this font to follow the design of the website.
registerFont('./spectral/Spectral-Light.ttf', {
family: 'Spectral',
})
context.fillStyle = '#fff'
context.fillRect(550, 90, 700, 500)
// Font settings
context.font = `normal ${fontSize}pt Spectral`
context.textAlign = 'left'
context.textBaseline = 'top'
To add text to the canvas, the syntax is context.fillText("Our title here", 10, 50)
. However, our text can vary in length and we need to wrap it nicely to fit the canvas boundaries. I’m using a simple helper method to wrap long titles. Also, controlling the line height is not as easy as you would expect compared to CSS.
const wrapText = (context, text, x, y, maxWidth, lineHeight) => {
var words = text.split(' ')
var line = ''
for (var n = 0; n < words.length; n++) {
var testLine = line + words[n] + ' '
var metrics = context.measureText(testLine)
var testWidth = metrics.width
if (testWidth > maxWidth && n > 0) {
context.fillText(line, x, y)
line = words[n] + ' '
y += lineHeight
} else {
line = testLine
}
}
context.fillText(line, x, y)
}
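Since wrapText only touches measureText and fillText, its line-breaking logic can be sanity-checked without the canvas package by stubbing the context; the 10-pixels-per-character metric is made up for the demo:

```javascript
// Stub context: pretend every character is 10px wide and record
// each line that fillText emits (no canvas package needed)
const lines = [];
const context = {
  measureText: (text) => ({ width: text.length * 10 }),
  fillText: (line) => lines.push(line.trim()),
};

// Same helper as in the article
const wrapText = (context, text, x, y, maxWidth, lineHeight) => {
  var words = text.split(" ");
  var line = "";
  for (var n = 0; n < words.length; n++) {
    var testLine = line + words[n] + " ";
    var metrics = context.measureText(testLine);
    var testWidth = metrics.width;
    if (testWidth > maxWidth && n > 0) {
      context.fillText(line, x, y);
      line = words[n] + " ";
      y += lineHeight;
    } else {
      line = testLine;
    }
  }
  context.fillText(line, x, y);
};

// A 120px-wide box fits roughly twelve of our fake characters per line
wrapText(context, "For Jimmy Wes and Oliver", 0, 0, 120, 20);
console.log(lines); // [ 'For Jimmy', 'Wes and', 'Oliver' ]
```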
Now I can add Artist
and Title
next to the record cover:
context.fillStyle = '#000'
wrapText(context,`${fileData.attributes.artist}`,640, textArtistY, 510, lineHeight)
wrapText(context,`”${fileData.attributes.title}”`,640, textTitleY, 510, lineHeight)
Just for some more art direction.
context.fillStyle = '#68d391'
context.beginPath()
context.arc(1080, 100, 50, 0, 2 * Math.PI)
context.fill()
I store and deliver all my media assets using Cloudinary, so let’s upload the generated image there and write the returned URL back into the frontmatter as ogimage:
title: For Jimmy, Wes and Oliver
artist: Christian McBride Big Band
image: https://res.cloudinary.com/urre/image/upload/w_600,h_600/v1606915632/screenshots/fghrcq1z1j4pk3sgyp2l.png
ogimage: NEW IMAGE URL HERE
// Load .env file
require('dotenv').config()
// Setup Cloudinary uploader
cloudinary.config({
cloud_name: process.env.CLOUDNAME,
api_key: process.env.APIKEY,
api_secret: process.env.APISECRET,
})
const newImage = cloudinary.v2.uploader.upload(
'./temp.jpg',
function (error, result) {
insertLine(`../_posts/${filename.file}`)
.content(`ogimage: ${result.secure_url}`) // Create YML variable
.at(9) // At line 9 in my case
.then(function (err) {
log(`${chalk.green(`✔️ Inserted ogimage front matter`)}`)
})
}
)
Some npm packages used here are chalk, insert-line and cloudinary.
If you want to see what your posts will look like on Twitter without actually tweeting it, you can test it using Twitter’s Card Validator. iframely and Facebook Debugger are also great tools I recommend.
There are a lot of possibilities here. You can:
It’s up to you. I currently have a script that checks the latest modified post; if it doesn’t have an OG image, I create one.
Now our sharing image looks way better!
I hope this post will help you make better sharing images. It is a really nice addition to help content stand out from the crowd!