The LaTeX font is called "Computer Modern". You can include a CSS file that declares the font via @font-face and then set the font family to Computer Modern. A good @font-face generator is Font Squirrel.
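As a minimal sketch (the font file name below is a placeholder for whatever the generator produces), the declaration could look like:

@font-face {
    font-family: "Computer Modern";
    /* placeholder file name; use the files produced by the @font-face generator */
    src: url('cmunrm.woff') format('woff');
}

body {
    font-family: "Computer Modern", serif;
}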
Using ImageMagick's convert, the following script takes an image file as parameter and creates a favicon.ico from it.
#!/bin/bash
if [ -z "$(which convert)" ]; then
    echo "You need ImageMagick for this tool."
    exit 1
fi
convert "$1" -resize 256x256 -transparent white favicon-256.png
convert favicon-256.png -resize 16x16 favicon-16.png
convert favicon-256.png -resize 32x32 favicon-32.png
convert favicon-256.png -resize 64x64 favicon-64.png
convert favicon-256.png -resize 128x128 favicon-128.png
convert favicon-16.png favicon-32.png favicon-64.png favicon-128.png favicon-256.png -colors 256 favicon.ico
rm -f favicon-16.png favicon-32.png favicon-64.png favicon-128.png favicon-256.png
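Browsers usually pick up favicon.ico automatically when it is placed in the web root, but the icon can also be referenced explicitly from the document head, for example:

<link rel="icon" href="/favicon.ico" type="image/x-icon" />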
Using the tools pngout, optipng and advdef, you can batch-optimize PNG images using the following short script:
# note: assumes file paths without whitespace
for i in $(find . -name '*.png'); do
    # pngout exits with a non-zero status when no further size reduction is possible
    pngout "$i" -c6 -f3 -b128 -kbKGD -v || true
    optipng -o7 -zm1-9 "$i"
    advdef -z -4 "$i"
done
which will traverse all directories in the current path and optimize all the PNG
files it finds.
The following page periodically reloads a webcam image (cam.jpg); it needs a file named jquery.js placed where the document resides. The trick is to use Math.random() to prevent the browser from caching the image.
<html>
  <head>
    <script type="text/javascript" src="jquery.js"></script>
    <script type="text/javascript">
      $(document).ready(function() {
        setInterval(reload, 1000);
      });
      function reload() {
        // append a random query string to bypass the browser cache
        $('#cam').attr('src', 'cam.jpg?' + Math.random());
      }
    </script>
    <title>Cams</title>
  </head>
  <body>
    <img src="cam.jpg" id="cam" border="1" />
  </body>
</html>
The following is a ready-made HTML snippet that can be included wherever you want Google+, Twitter and Facebook to appear as icons along with the share count. You will have to change the Facebook appId (298259421963158 in the example below) to your own, generated using Facebook's like button generator.
<!-- Social Buttons Start -->
<!-- Facebook -->
<div id="fb-root"></div>
<script type="text/javascript" async defer>(function(d, s, id) {
  var js, fjs = d.getElementsByTagName(s)[0];
  if (d.getElementById(id)) return;
  js = d.createElement(s); js.id = id;
  js.src = "//connect.facebook.net/en_US/sdk.js#xfbml=1&appId=298259421963158&version=v2.0";
  fjs.parentNode.insertBefore(js, fjs);
}(document, 'script', 'facebook-jssdk'));</script>
<!-- Google +1 -->
<script type="text/javascript" async defer>
  (function() {
    var po = document.createElement('script');
    po.type = 'text/javascript';
    po.async = true;
    po.src = 'http://apis.google.com/js/plusone.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(po, s);
  })();
</script>
<!-- Twitter -->
<script type="text/javascript" async defer>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
<!-- Reddit -->
<script type="text/javascript">
  reddit_url = document.URL;
  reddit_title = document.title;
</script>
<!-- LinkedIn -->
<script src="//platform.linkedin.com/in.js" type="text/javascript" async defer>
  lang: en_US
</script>
<!-- Tumblr -->
<script type="text/javascript" src="//platform.tumblr.com/v1/share.js"></script>
<!-- Pinterest -->
<script src="//assets.pinterest.com/js/pinit.js"></script>
<!-- Scripts End -->
<!-- Horizontal social buttons Start -->
<div class="socialbuttons" style="max-width: 700px; margin: 1em auto; padding: 0.5em 0 0.5em 0; background-color: rgba(253, 252, 218, 1); border: 1px solid #c2b59b;">
  <ul style="list-style:none; height: 20px; margin: 0 auto; width: -webkit-fit-content; width: -moz-fit-content; width: fit-content;">
    <!-- Facebook Like+Send -->
    <li style="float: left; margin-left: .1em; margin-right: .1em;">
      <fb:like colorscheme="light" expr:href="data:post.canonicalUrl" layout="button_count" send="true" show_faces="false"/>
    </li>
    <!-- Twitter -->
    <li style="float: left; width: 75px; margin-left: .1em; margin-right: .1em;">
      <a class="twitter-share-button" expr:data-url="data:post.canonicalUrl" expr:data-text="data:post.title" data-count="horizontal" data-lang="en" href="https://twitter.com/share">Tweet</a>
    </li>
    <!-- Google +1 -->
    <li style="float: left; width: 56px; margin-left: .1em; margin-right: .1em;">
      <g:plusone annotation="bubble" expr:href="data:post.canonicalUrl" size="medium"/>
    </li>
    <!-- Reddit -->
    <li style="float: left; margin-left: .1em; margin-right: .1em;">
      <script type="text/javascript" src="http://www.reddit.com/static/button/button1.js"></script>
    </li>
    <!-- LinkedIn -->
    <li style="float: left; margin-left: .1em; margin-right: .1em;">
      <script type="IN/Share" expr:data-url="data:post.canonicalUrl" data-counter="right"></script>
    </li>
    <!-- Tumblr -->
    <li style="float: left; margin-left: .1em; margin-right: .1em;">
      <a href="http://www.tumblr.com/share" title="Share on Tumblr" style="display:inline-block; text-indent:-9999px; overflow:hidden; width:129px; height:20px; background:url('//platform.tumblr.com/v1/share_3.png') top left no-repeat transparent;">Share on Tumblr</a>
    </li>
    <!-- Pinterest -->
    <li style="float: left; margin-left: .1em; margin-right: .1em;">
      <a data-pin-config="beside" data-pin-do="buttonBookmark" href="//pinterest.com/pin/create/button/"><img src="//assets.pinterest.com/images/PinExt.png" /></a>
    </li>
  </ul>
</div>
<!-- Social Buttons End -->
The code loads all the necessary JavaScript, aligns the buttons in a row and sends the URL of the current page.
For CSS properties such as padding and margin, the parameters follow the box model below:
+--a--+
|     |
d     b
|     |
+--c--+
where properties can be specified in one go like:
p { padding: a b c d; }
which is equivalent to the named properties:
p {
    padding-top: a;
    padding-right: b;
    padding-bottom: c;
    padding-left: d;
}
with a, b, c, d being the sides of the box (top, right, bottom and left, respectively).
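For instance (a concrete example rather than one from the original text):

p { padding: 10px 20px 30px 40px; }

sets the top padding to 10px, the right to 20px, the bottom to 30px and the left to 40px; in other words, the values are assigned clockwise starting from the top.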
The following will centre three list elements horizontally on a page.
The HTML
part is the following:
<div class="container"> <ul class="navigation"> <li><a href="#">Home</a></li> <li><a href="#">About</a></li> <li><a href="#">Projects</a></li> </ul> </div>
and the CSS part uses the CSS3 fit-content value:
.container {
    max-width: 600px;
    margin: 0 auto;
}
.navigation {
    list-style: none;
    margin: 0 auto;
    width: -webkit-fit-content;
    width: -moz-fit-content;
    width: fit-content;
}
li {
    float: left;
}
Create a file robots.txt
in the root of your web-server with the following contents:
User-agent: ia_archiver
Disallow: /
The history will disappear from archive.org within 24 hours, but if the lines are later removed from robots.txt, the history shows up again.
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Blank HTML5 Template</title>
  <meta name="description" content="Blank HTML5 Template">
  <meta name="author" content="Wizardry and Steamworks">
  <link rel="stylesheet" href="css/style.css?v=1.0">
</head>
<body>
</body>
</html>
Using jQuery, the textarea with the id chat
will be autoscrolled whenever the following animate
command is executed:
$("#chat").animate({ scrollTop:$("#chat")[0].scrollHeight - $("#chat").height() },1000);
Hardware is frequently optimised to use textures whose resolutions follow the power of two rule. The power of two rule states that both the width and the height of a texture should be a power of two (the two dimensions do not have to be equal).
This includes images with dimension values such as: 8, 16, 32, 64, 128, 256, 512, 1024, 2048, etc.
The resulting textures will have power-of-two resolutions such as 64x64 or 256x64, etc.
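As an illustrative sketch (the default file name texture.png is just a placeholder), the following shell snippet uses ImageMagick's identify to check whether both dimensions of an image are powers of two:

#!/bin/bash
# report whether both dimensions of an image follow the power of two rule
FILE="${1:-texture.png}"
read -r W H < <(identify -format "%w %h" "$FILE")
for D in $W $H; do
    # a power of two has exactly one bit set, so D AND (D - 1) is zero
    if [ "$D" -eq 0 ] || [ $((D & (D - 1))) -ne 0 ]; then
        echo "$FILE: ${W}x${H} does not follow the power of two rule"
        exit 1
    fi
done
echo "$FILE: ${W}x${H} follows the power of two rule"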
Even though CSS allows setting the cursor via the cursor property, the browser will not play the animation even if an animated GIF file is provided as the url parameter. To work around the issue, CSS keyframe animations can be used to animate the cursor.
First, the GIF animation frames should be dumped to separate files and then the website CSS should be changed to create an animation out of all the dumped files. For example, the following CSS:
* {
    cursor: url(images/pumpkin-cursor/pumpkin_frame_00.gif), auto;
    -webkit-animation: cursor 1000ms infinite;
    animation: cursor 1000ms infinite;
}
@-webkit-keyframes cursor {
    0%   {cursor: url(images/pumpkin-cursor/pumpkin_frame_00.gif), auto;}
    8%   {cursor: url(images/pumpkin-cursor/pumpkin_frame_01.gif), auto;}
    16%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_02.gif), auto;}
    24%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_03.gif), auto;}
    32%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_04.gif), auto;}
    40%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_05.gif), auto;}
    48%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_06.gif), auto;}
    56%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_07.gif), auto;}
    64%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_08.gif), auto;}
    72%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_09.gif), auto;}
    80%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_10.gif), auto;}
    88%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_11.gif), auto;}
    100% {cursor: url(images/pumpkin-cursor/pumpkin_frame_12.gif), auto;}
}
@keyframes cursor {
    0%   {cursor: url(images/pumpkin-cursor/pumpkin_frame_00.gif), auto;}
    8%   {cursor: url(images/pumpkin-cursor/pumpkin_frame_01.gif), auto;}
    16%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_02.gif), auto;}
    24%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_03.gif), auto;}
    32%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_04.gif), auto;}
    40%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_05.gif), auto;}
    48%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_06.gif), auto;}
    56%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_07.gif), auto;}
    64%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_08.gif), auto;}
    72%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_09.gif), auto;}
    80%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_10.gif), auto;}
    88%  {cursor: url(images/pumpkin-cursor/pumpkin_frame_11.gif), auto;}
    100% {cursor: url(images/pumpkin-cursor/pumpkin_frame_12.gif), auto;}
}
was used to animate a glowing pumpkin cursor during October 2023.
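The frames themselves can be dumped with ImageMagick; as a sketch (the input file pumpkin.gif and the output directory are placeholders matching the CSS above):

convert pumpkin.gif -coalesce +adjoin images/pumpkin-cursor/pumpkin_frame_%02d.gif

where -coalesce flattens each frame onto the full canvas and +adjoin writes every frame out to a separate, numbered file.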
Reverse-proxying allows websites hosted within an internal network to be made public via one central computer with a public IP address, which forwards web requests to the internal machines depending on the value of the Host header matching a domain and, in some cases, also on the requested sub-path.
Because public IP addresses and domain names are scarce resources that have to be bought (more precisely, leased), a reverse proxy allows many websites to be hosted on the same IP address, or on the same IP address and domain provided that a different sub-path is requested.
All the Servarr PVRs (sonarr, radarr, lidarr, etc.) typically include a configuration setting for the base path from which content should be served. That allows a configuration with one public IP address, one single domain and multiple sub-paths; for example:
home.machine.tld/sonarr
home.machine.tld/lidarr
home.machine.tld/readarr
The problem is that websites, and in particular the content (such as HTML) served by various applications, are created using absolute paths, such that the HTML markup includes references starting with a forward slash, thereby referencing the root of the web application.
For instance, assume that the following content is served as part of the HTML output:
<img src="/img/clown.jpg"/>
This makes the browser send an additional request in order to retrieve /img/clown.jpg
as the img
tag instructs the browser to.
In case the reverse proxy is configured to reverse-proxy via sub-paths, such as:
home.machine.tld/sonarr
home.machine.tld/lidarr
home.machine.tld/readarr
then a request to home.machine.tld/img/clown.jpg will not be routed to any application and will thus not be found, with the reverse-proxy more than likely picking a default path or just returning a 404 HTTP status code.
In principle, websites and web-applications can be designed to only use relative paths, such that, as an example, the image tag from the previous example would be transformed into:
<img src="img/clown.jpg"/>
where img/clown.jpg represents a path relative to the location the current HTML document was retrieved from.
By using relative paths, and following the example, the browser will now successfully perform a lookup of home.machine.tld/app/img/clown.jpg, where app represents a base-path from which the web-application is served. The base-path, in this case app, is typically a configurable parameter of the web application itself. When a base-path is used to set the root from which content will be served, the same base-path is then used to configure the reverse-proxy such that content meant for different applications is split up and sent to the different internal hosts and ports.
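As a sketch of that case (the /sonarr base-path, the internal address 192.168.1.10 and port 8989 are assumptions for illustration), when the application itself is configured with a base-path of /sonarr, the corresponding Caddy entry only needs to forward the sub-path without stripping it:

# the application expects to be served from the /sonarr base-path,
# so the prefix is forwarded as-is (handle, unlike handle_path, does not strip it)
handle /sonarr/* {
    reverse_proxy 192.168.1.10:8989
}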
How web-applications are written is up to their individual authors, such that there is no guarantee that a given software package will contain an option to set the base-path; in many cases, making sub-path reverse-proxying work boils down to having the reverse-proxy rewrite the HTML content sent to the browser in order to shift the request paths and insert a base path.
As an example, here is a caddy configuration that uses content-rewriting in order to add a base-path to the Bitwarden self-hosted server for the purpose of making sub-path reverse-proxying work:
redir /passwords /passwords/
handle_path /passwords/* {
    reverse_proxy 127.0.0.1:4255 {
        header_up Host {host}
        header_up X-Real-IP {remote}
        header_up X-Forwarded-Host {hostport}
        header_up X-Forwarded-For {remote}
        header_up X-Forwarded-Proto {scheme}
        header_up Accept-Encoding identity
    }
    replace {
        "/admin" "/passwords/admin"
        "/vw_static" "/passwords/vw_static"
    }
}
The configuration ensures that a request such as: home.machine.tld/passwords
is forwarded to a machine on the internal network that serves the Bitwarden application. Due to Bitwarden not being written using relative paths and not providing an option to set a base-path, the actual content that is sent by the Bitwarden server, through the reverse-proxy and to the browser must be modified in order to rewrite absolute paths and set a base path.
The configuration stanza:
replace {
    "/admin" "/passwords/admin"
    "/vw_static" "/passwords/vw_static"
}
will rewrite HTML content that references the "/admin" and "/vw_static" paths by prepending the "passwords" base-path such that follow-up browser requests will still be recognized by the reverse-proxy (ie: home.machine.tld/admin
vs. home.machine.tld/passwords/admin
).
Of course, this is a very hacky and unstable solution to force web-applications to work with sub-path reverse-proxying, especially considering that when the application is updated, it might add some other paths within the HTML output that will require the reverse-proxy configuration to be changed to account for them.
However, sub-path reverse-proxying is overall the cheapest solution, in the sense that if one uses a dynamic DNS provider the costs are zero, which is why it is vehemently sought after and why you might find discussions on project development pages requesting that the authors implement a base-path and/or rewrite their HTML output to use relative references and links.
Sub-domain reverse-proxying seems straightforward, but additionally requires the ability to register a domain as well as some level of access to the DNS records in order to create sub-domains. Based on the domain name and sub-domain, the reverse-proxy just matches the sub-domain and knows to which internal machine the request should be redirected:
sonarr.machine.tld
radarr.machine.tld
lidarr.machine.tld
Compared to sub-path reverse-proxying, support for a base-path is no longer required: absolute paths will still go through the reverse proxy to their correct destination because the Host matching is performed on the sub-domains sonarr, radarr, lidarr, etc., instead of on the requested path, like machine.tld/sonarr.
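For comparison, a minimal Caddyfile sketch for sub-domain reverse-proxying could look as follows (the internal addresses and ports are placeholders):

sonarr.machine.tld {
    reverse_proxy 192.168.1.10:8989
}

radarr.machine.tld {
    reverse_proxy 192.168.1.10:7878
}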
Caddy developers frequently suggest using sub-domain reverse-proxying instead of sub-path reverse-proxying, but this is not always a matter of choice, given that domain name registration is not free and that dynamic DNS services often charge for or limit the number of sub-domains that can be created. For example, the dynamic DNS provider dynu.net allows registering up to 4 domains but also just 4 sub-domains, which greatly limits the applicability of sub-domain reverse-proxying for large-scale deployments.
Conversely, Caddy developers often cite duckdns.org as one dynamic DNS provider that offers an unlimited number of sub-domains per registered domain, for example sonarr.test.duckdns.org, where test.duckdns.org is the domain registered with the dynamic DNS service and sonarr is the sub-domain. Apparently, duckdns.org even allows sub-sub-domains, with domains such as test.sonarr.test.duckdns.org still resolving to the same IP address as test.duckdns.org. However, one should be aware that sub-domains seem to only incidentally be configured to recursively resolve to the registered domain's IP address, with the duckdns.org interface lacking any ability to actually define the sub-domains, such that in the event of duckdns.org restructuring in any way, this seemingly undocumented feature might go away or end up restricted (ie: just like dynu.net monetizes sub-domain registrations).
One problem with website uptime checking is that the check should be performed externally in order to be sure that the website is visible outside the network. Given a remote server, checking whether a website is up and running is fairly trivial: it suffices to check access to the relevant port numbers, so a remote scanner should be suitable.
However, a scan performed by just accessing the ports or issuing a curl request would not really indicate whether the website's appearance is also unchanged, for instance in case the website files were tampered with in order to display a broken website instead. A solution is to use an external service capable of taking an image of the website, compare that image with a locally taken snapshot, and then judge, based on a perceptual hash (pHash), whether the two images are too far apart.
The script below uses "apiflash" to remotely take the image of a website and compares it to an image taken locally within the network:
#!/bin/bash
###########################################################################
##  Copyright (C) Wizardry and Steamworks 2024 - License: MIT            ##
##  Please see: https://opensource.org/license/mit/ for legal details,   ##
##  rights of fair usage, the disclaimer and warranty conditions.        ##
###########################################################################
# This script can be used in order to check whether a website is up and  #
# running by using "apiflash", an online web-service that is capable of  #
# taking screenshots of a website.                                       #
#                                                                         #
# In order to use the script, the URL to an image on the website to be   #
# checked should be provided, ie: https://grimore.org/_media/was.png and #
# the script will perform the necessary round-trip and output a number   #
# that represents how dissimilar the two images are. The larger the      #
# number, the more dissimilar the two images are.                        #
#                                                                         #
# Requirements (should be executable and found in $PATH):                #
#   * Debian: bc uuid-runtime curl                                       #
#   * Scripts:                                                           #
#     http://www.fmwconcepts.com/imagemagick/downloadcounter.php?        #
#       scriptname=phashcompare&dirname=phashcompare                     #
#     http://www.fmwconcepts.com/imagemagick/downloadcounter.php?        #
#       scriptname=phashconvert&dirname=phashconvert                     #
###########################################################################

###########################################################################
##  CONFIGURATION                                                        ##
###########################################################################

# The apiFlash access key obtained from the https://apiflash.com website.
API_FLASH_ACCESS_KEY=""

###########################################################################
##  INTERNALS                                                            ##
###########################################################################

if [ -z "$1" ]; then
    echo "Syntax: $0 URL-To-Image"
    exit 1
fi

IMAGE_SOURCE="$1"
NONCE=$(uuidgen | sed 's/\-//g')

# hash the image retrieved remotely via apiflash
A=$(curl -s "https://api.apiflash.com/v1/urltoimage?access_key=$API_FLASH_ACCESS_KEY&url=$IMAGE_SOURCE#$NONCE" -o - | \
    convert - -resize x128 jpeg:- | \
    phashconvert)

# hash the image retrieved directly
B=$(curl -s "$IMAGE_SOURCE#$NONCE" -o - | \
    convert - -resize x128 jpeg:- | \
    phashconvert)

# D will contain the difference of both hashes
D=$(phashcompare "$A" "$B")

printf "$D\n"
where:
API_FLASH_ACCESS_KEY should be set to the API key obtained from apiflash.
An example invocation would be:
/usr/local/bin/website-image-roundtrip https://grimore.org/_media/was.png
where:
https://grimore.org/_media/was.png is the path to an image hosted on the website to be checked.
The script normalizes both images and then uses Fred's ImageMagick scripts: phashconvert first generates a perceptual pHash of each image, and phashcompare then compares the two hashes. The output is a number that represents the perceptual distance between the two images; in other words, the larger the number, the more different the two images are. Ideally, the number will then be piped into a different script that takes action when the perceptual difference between the two images is too large (ie: attempt to restart the webserver, or send a notification).
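As a sketch of such a follow-up script (the threshold value, the installed script path and the notification command are assumptions rather than part of the original setup):

#!/bin/bash
# act when the perceptual difference reported by the round-trip script
# exceeds a chosen threshold (the value 20 is an arbitrary example)
THRESHOLD=20
DIFF=$(/usr/local/bin/website-image-roundtrip https://grimore.org/_media/was.png)
# the distance may be fractional, so compare using bc
if [ "$(echo "$DIFF > $THRESHOLD" | bc -l)" -eq 1 ]; then
    echo "website appearance changed (distance: $DIFF)" | \
        mail -s "uptime check failed" admin@example.com
fi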
Although this idea is implemented above as a fairly simple Bash script, the principle stands: it could be implemented using any service outside the network local to the website that provides a screenshot of a URL. The resulting check is far stronger than just retrieving a known string via the open ports, because the overall appearance of the website is taken into account. Similarly, it should even be possible to slice up the website and compare areas or regions in order to determine whether the website is up and running.