Daft Google Docs Fail - Insert Date?

posted 13 Dec 2016, 04:12 by Andrew at Lycom   [ updated 13 Dec 2016, 04:15 ]

Generally I find Google Docs (part of the bizarrely rebranded G Suite) fulfils 99% of my needs. But sometimes there are stare-at-the-screen-and-work-out-where-something-is moments.  Like trying to insert the current date into a document (for a legal agreement, where I wanted to track printed drafts).  You can't do it in Google Docs! There is no built-in mechanism for doing this.

Luckily I found a post with some Apps Script code that seemed to do the trick:

——————-COPY BELOW THIS LINE—————————

function onOpen() {
  // Add a custom menu with a single item (the menu label is arbitrary).
  DocumentApp.getUi().createMenu('Utilities')
      .addItem('Insert Date', 'insertAtCursor')
      .addToUi();
}

/**
 * Inserts the date at the current cursor location in boldface.
 */
function insertAtCursor() {
  var cursor = DocumentApp.getActiveDocument().getCursor();
  if (cursor) {
    // Attempt to insert text at the cursor position. If insertion returns null,
    // then the cursor's containing element doesn't allow text insertions.
    var date = Utilities.formatDate(new Date(), "GMT", "dd-MM-yyyy"); // or "yyyy-MM-dd'T'HH:mm:ss'Z'"
    var element = cursor.insertText(date);
    if (element) {
      element.setBold(true);
    } else {
      DocumentApp.getUi().alert('Cannot insert text at this cursor location.');
    }
  } else {
    DocumentApp.getUi().alert('Cannot find a cursor in the document.');
  }
}

——————-COPY ABOVE THIS LINE—————————

Now, why would you not include a Date() function as standard? Think on ...

Cpanel - Couple of Post Install Config Hacks

posted 20 Jun 2016, 12:59 by Andrew at Lycom   [ updated 20 Jun 2016, 13:00 ]

I have done quite a lot of work with Cpanel for a couple of clients recently.  Here are some 'extra' config steps that I found useful.

Change the default cpanel error page

Simplicity itself - you know the defaultwebpage.cgi ( or /cgi-sys/defaultwebpage.cgi ) landing page you get for errors?

In WHM, go to the Web Template Editor (see https://documentation.cpanel.net/display/1144Docs/Web+Template+Editor ) and you can customise / personalise the message text to reflect your needs.

Add custom directories to the Cpanel system backup

Installed some custom software? Got some application dirs (e.g. /opt/nginx) you want to back up that don't sit in the default places that the Cpanel system backup will back up?

First, go to "WHM >> Backup Configuration" and ensure "Backup System Files" is enabled.

Create a file in the "/var/cpanel/backups/extras/" directory. e.g:
touch /var/cpanel/backups/extras/opt

Edit this file and add a line for the "/opt/nginx" directory:

# cat /var/cpanel/backups/extras/opt
/opt/nginx
Check (or force) your next backup, then inspect the catalogue and confirm this dir is included in the backup.  Job done. (No need for any wacky cron copy scripts to do the same thing.)
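The steps above boil down to a one-line append per directory. Here's a minimal sketch as a shell helper (the function name and the CPANEL_EXTRAS_DIR override are my own additions so it can be tried outside a real cPanel box - they're not cPanel conventions):

```shell
# add_backup_extra DIR NAME - append DIR to the extras file NAME so the
# cPanel system backup picks it up on its next run.
add_backup_extra() {
  extras_dir="${CPANEL_EXTRAS_DIR:-/var/cpanel/backups/extras}"
  mkdir -p "$extras_dir"
  printf '%s\n' "$1" >> "$extras_dir/$2"
}

# On the real server (as root):
#   add_backup_extra /opt/nginx opt
#   cat /var/cpanel/backups/extras/opt
```

Remember this only does anything if "Backup System Files" is enabled in the WHM backup configuration.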

ESET Endpoint Antivirus - Not Updating?

posted 22 May 2016, 07:48 by Andrew at Lycom   [ updated 22 May 2016, 07:49 ]

I've worked my way through lots of antivirus products.  Over time I've come to prefer the simpler, less resource-hungry ones.  And reliability is the key - you need something that is robust and will just work quietly in the background. I came to love the Network Associates Viruscan products, which did just that.  In recent times I have been a big fan of ESET Antivirus - I have it on most of the servers and desktops I look after.  I love the way it doesn't need my time! I check occasionally to make sure it's still there, but by and large it just keeps doing the protecting business.

So, I had a rare opportunity to do some troubleshooting with it recently when one of my servers emailed me reporting a problem downloading updates.

It's a quick fix: in the server console, the 'Clear update cache' button solved it, and I then downloaded the AV update manually.

Actually, I need to do so little with it I'm often a bit rusty on the finer points of the ESET interface, but I think that's a sign of success! Especially when I think of the time spent in the past fiddling with the Norton Antivirus configuration widgets on various desktops fixing various things ...

An Easy Way to Strip Out EXIF / Location From Your Photos

posted 9 May 2016, 02:44 by Andrew at Lycom   [ updated 9 May 2016, 02:45 ]

One of the really useful things about taking photos on a smartphone is that digital file formats can also attach extra information to the image (so-called EXIF data).  This can be really useful for showing where you took the photograph.  But it can also be undesirable, and possibly dangerous, if you publish the images online.  For a spectacular example: the legendary (and probably crazy) John McAfee was (allegedly) located in the jungles of Guatemala after visiting reporters forgot to remove the GPS data from a Twitter post.

I've previously used various software to strip out this metadata from some of my photos - a tedious process.  Anyway, there's a simple way to do it now if you use Goooooogle Photos.  

  1. Go to your https://photos.google.com/settings page

  2. Enable the option that says 'Remove geo-location in items shared by link' (this affects items shared by link, but not by other means)

Then, when you want to sanitise some photos from your collection, you can select them and use 'share with a link' - people can either use the link directly, or you can download them yourself for whatever final destination you have in mind.  You'll find that on these 'new' files most of the extra EXIF data (including GPS) has been removed - the originals remain unaffected.

Handy, eh?

Getting excessive LFD Excessive resource usage / Suspicious Process Messages?

posted 9 Mar 2016, 13:16 by Andrew at Lycom   [ updated 9 Mar 2016, 13:17 ]

I've been doing a project setting up a Cpanel dedicated Linux CentOS server.

Part of the process involved getting the environment ready for hosting, and fine-tuning the various security / alerting options prior to it going live. One thing I came across was a couple of LFD alert emails arriving every 30 mins or so:

e.g. lfd on xxx.xxx.co.uk: Excessive resource usage: xxxxx (2305 (Parent PID:2305))
lfd on xxx.xxx.co.uk: Suspicious process running under user xxxxx

Pretty annoying - and when I checked out the source, I found it wasn't anything to worry about; it just offended the defaults set up in the LFD daemon. What's that?

Short for Login Failure Daemon, LFD is a process that is part of the ConfigServer Security & Firewall (CSF) that periodically checks for potential threats to a server. LFD looks for such attacks as brute-force login attempts and if found blocks the IP address attempting to attack that server.

It's part of ConfigServer, a "Stateful Packet Inspection (SPI) firewall, Login/Intrusion Detection and Security application for Linux servers" bundled with my server build / cpanel.  Useful - but as with the boy who cried wolf, too many emails crying "wolf" get ignored and you end up missing a real incident.

So, I logged in with SSH as root, found the CSF configuration file (/etc/csf/csf.conf) and edited a couple of options to fit my setup.
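My notes don't record exactly which options I changed, but for these two alerts the usual suspects in csf.conf are the process-tracking thresholds - something along these lines (values illustrative, not the ones from this server):

```
# /etc/csf/csf.conf - process tracking thresholds
# Alert if a user process uses more than this much RAM (MB); 0 disables the check
PT_USERMEM = "512"
# Alert if a user process accumulates more than this much CPU time (seconds); 0 disables
PT_USERTIME = "1800"
```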

Then I found the /etc/csf/csf.pignore file and edited it to exclude the executable that was generating the spurious results:
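The executable path will be specific to your setup, but csf.pignore entries look like this (the paths here are placeholders, not the actual binary from this server):

```
# /etc/csf/csf.pignore
# Ignore a binary by its executable path:
exe:/usr/local/bin/myapp
# Or ignore everything running under a particular user:
user:myappuser
```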


Then I restarted csf and lfd when done:

csf -r
service lfd restart

Checked my emails for a few hours, and they had settled down - I still got various alerts (e.g. telling me I had logged on via SSH) but not so many that I stopped looking at them.

Snowballs and Cloudberry ...

posted 1 Mar 2016, 16:03 by Andrew at Lycom   [ updated 1 Mar 2016, 16:04 ]

I'm a great fan of Cloudberry Backup. It does just about everything you could want from a backup and data sync solution.

But one of the drawbacks of any cloud-centric backup regime is the_sheer_amount_of_time_it_takes_for_that_first_backup_to_run. My normal technique is to set it off, but with a schedule that throttles the bandwidth usage right back during the day so it doesn't annoy the folks in offices too much - they'd soon notice if their web pages took forever to load.  Even with a fast DSL / fibre link it can take a week or so to complete; thereafter only the changed 'deltas' will be uploaded.

So I was really intrigued to see the Cloudberry team post about using the new Amazon Snowball device to physically shift data (in encrypted form, naturally) to the AWS datacenters.

Amazon Snowball

Looks like the perfect solution to get round that 'first backup' problem.  

You've got to have a LOT of data to backup in order to make it worthwhile though! 50TB anyone?

Wildcard DNS - Good or Bad?

posted 29 Feb 2016, 02:47 by Andrew at Lycom   [ updated 1 Mar 2016, 15:14 ]

I'm in the process of trying to move a client's sites from their current hosting provider's VPS to a dedicated server.

There's usually quite a discovery process involved in these jobs, as typically most SME businesses are very light on documenting things! Information about logins, which interfaces are used to manage services and so on becomes quite sketchy the more you enquire.

Anyway, one thing I missed about their main domain is that the agency managing it has set up 'wildcard DNS' for their domain.  I've not come across this much before, except for sub-domains and specialist implementations of online blogging platforms.  I'm used to a traditional BIND / Microsoft DNS server setup where you definitely want to define your DNS records very carefully, with possibly some scope for allowing dynamic DNS records as part of an internal 'split horizon' DNS/DHCP setup.
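For anyone who hasn't met one: a wildcard record in a BIND zone file looks like this (the name and address are placeholders, obviously not the client's real zone):

```
; db.example.com fragment
$ORIGIN example.com.
www     IN  A   192.0.2.10   ; an explicit, documented host
*       IN  A   192.0.2.10   ; ...and every other name, known or not
```

The `*` matches any name under the domain that doesn't have a more specific record of its own.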

Is it bad? In my opinion, YES.  Mainly, it's LAZY.  You don't have to bother about knowing your fully qualified domain names (FQDNs) - everything just gets pointed at the same IP address.  This means that if you do come to move to a different setup (like now), you have no idea which hostnames are actually in use. OK, you can look at host headers / aliases on the web server - unless some genius has also allowed the web site to accept all requests with a catch-all binding.


Hmmm. Not good.

In the past I've had a couple of sites running on dedicated IP addresses which had all sorts of 'unknown' international domains pointed at them, and these promptly broke when we migrated servers (it was a large multinational company with lots of country TLDs). There are lots of other technical and security reasons why it's considered bad practice. My main concern is that it breaks error handling; and whilst it is done for web hosting convenience, these records will affect other protocols as well.

I'm happy to accept that there are circumstances where it is useful, but I think those circumstances can be better addressed by the careful use of subdomains or domain variations.


Oh, and every organisation should keep track of which domains and hostnames it actually uses, and why!

Restoring a Microsoft Azure 'bacpac' file to SQL Server 2008 R2

posted 23 Feb 2016, 16:50 by Andrew at Lycom   [ updated 23 Feb 2016, 16:51 ]

Today I got a job to transfer an IIS + SQL site from an Azure hosted environment to a 'real' server (Windows 2008 R2).  All went well until I came to 'restore' the SQL backup file into the new SQL DB set up on the server (running SQL Server 2008 R2). Fool, did you think that would work?

bacpac - uh-oh!

Ah. Not good. A bit of googling led me to this helpful post:


So I installed the SQL 2014 Management Tools (NOT the full version, JUST the management tools). Another helpful link to the individual downloads:


Choose the 'New Sql Server 2014 Stand-alone install' option (and NOT the UPGRADE option). It _will_ just install the console tools, and not affect your current SQL Server 2008 setup.

Then use the 'import data-tier application' method described in the above link.

I found I had to create a NEW database. The import then gave me an error:

Could not load schema model from package. (Microsoft.SqlServer.Dac)

Internal Error. The database platform service with type Microsoft.Data.Tools.Schema.Sql.SqlAzureV12DatabaseSchemaProvider is not valid. You must make sure the service is loaded, or you must provide the full type name of a valid database platform service. (Microsoft.Data.Tools.Schema.Sql)

So I installed both the x86 and x64 versions of Microsoft® SQL Server® Data-Tier Application Framework (February 2015).

Re-tried the import process above and was able to restore the bacpac backup into my SQL Server 2008 R2 setup as a new DB.

Not exactly a straightforward process, but it does kind of work.  Hopefully I won't get too many of these to do in future.

Tightening SSL Security on Windows Server 2008 R2

posted 30 Jan 2016, 16:32 by Andrew at Lycom   [ updated 5 Feb 2016, 03:47 ]

I had a request from a client to tighten up SSL security settings on their server after a 'security analysis' from the hosting company. These can be useful in flagging up things that need doing; they can also be a spurious waste of time, particularly if just run against a generic template which tries to cover everything from domain controllers to application servers - blindly applying recommendations from those tends to just break stuff.

Anyway, the SSL exercise was worth doing - particularly as we had the go ahead that the client wasn't worried about cutting off visitors with older operating systems and browsers that don't support newer SSL protocols out of the box. 

So, this is the process I followed:

Firstly, go to the excellent SSL Labs test page and run a report on one of your SSL websites. This will give you a baseline to start from.

Next, on the server

Enable TLS 1.2 following these instructions.  

Disable SSL2 and SSL3 like this.

You can also cheat and apply these registry settings - the "Enabled" / "DisabledByDefault" values are the standard SCHANNEL switches for each key - but take a registry backup first ;-) 

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 40/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 56/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 64/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"Enabled"=dword:ffffffff
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"Enabled"=dword:ffffffff
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0]

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client]
"Enabled"=dword:00000000

Then reboot, and run the SSL Labs test again. Analyse the results: you should see an improvement. Don't worry about achieving the top rating - unless you have a particular requirement to be bleeding edge it will take a fair bit of work and may affect other things.

To take it further, you can install the IIS Crypto freeware, and for an example of the issues involved (mainly the compromises trying to achieve perfect scores on one assessment scheme without breaking things - like MS SQL), look here and here.

Did I mention about taking those registry backups? ;-)

Late Night Plesk 11.5 to 12.5 Upgrade. And the Morning After...

posted 17 Jan 2016, 11:39 by Andrew at Lycom   [ updated 17 Jan 2016, 11:40 ]

Most sysadmins hate upgrades. I mean, they really hate them. Things break. Stuff that should just keep on working doesn't. But you can't not do it - life is about security updates now; products move on and have new requirements.

I had to look at a Plesk 11.5 setup that had been working pretty well. But the PHP versions were stuck in the dark ages, and that was starting to impact some projects.   So, I reviewed the Parallels doc on choosing an upgrade strategy:

and decided to go ahead with an upgrade to Plesk 12.5.  I read the bit about checking where your Plesk database is held - mysql. OK, good.

Then used the guide:

and also

I kicked off the upgrade while RDP'd into the server one night. All seemed to go well until I realised it wasn't doing much; eventually I spotted the 'pop-up' window that hadn't 'popped up':

The following services require a system restart:
 Component-Based Servicing pending restart
 Restart the system before continuing installation.

Rebooted and restarted the upgrade:

Plesk pre-upgrade checking...

WARNING: After upgrade you have to add write pemissions to psacln group on folder C:\inetpub\mailroot\pickup. Please, check 

http://kb.odin.com/en/111194 for more details.

And then it carried on with the upgrade, and on, and ... until I got worried and searched some more:

This is the useful bit:

Open autoinstaller.log in C:\ParallelsInstaller and scroll down to the end. The last line shows the action that is currently running, e.g.:

  SI: Action 18:22:52: Applying security. Applying security for D:\Plesk\

If there are a lot of files under the Plesk directory - for example, if the vhosts directory sits under the Plesk installation directory, or a lot of mail is stored on the server - ApplySecurity.exe has to check every one of them, and the task can take a very long time to complete.
And they are not kidding - applying security permissions took forever. This was quite a well-used hosting server with a lot of files - don't underestimate just how long this step will take.

Anyway, eventually it completed and all seemed well - Plesk interface loaded fine, IIS sites were up and a random selection of websites could be browsed. Job done. Go to bed.

The Next Morning.

Never good when you get txts while eating your breakfast. What isn't in the Plesk documentation is any warning of the number of things that might go awry during the upgrade. Here's my list from today's experience:

  1. Various IIS sites start asking for authentication (Authentication Required error 401). Nice.  Turns out there's a Plesk tool that fixes this


    But more specifically:
    Run the following command:
    "%plesk_cli%\repair.exe" --reconfigure-web-site -web-site-name example.com
    Then perform the following command to re-create NTFS permissions for additional FTP-accounts (if any):
    "%plesk_cli%\repair.exe" --reconfigure-ftp-site -web-site-name example.com

  2. PHP Hell.

    Remember that time the upgrade spent applying permissions? Some of those changes may well break your installs. Check things like permissions on the PHP sessions dir (e.g. c:\windows\temp) for the psacln group. Verify the ACLs on the httpdocs dir for problem sites. Getting lots of 'Internal Server Error 500' messages? Probably 'FASTCGI_UNEXPECTED_EXIT' errors:


    HTTP Error 500.0 - Internal Server Error

    C:\Program Files (x86)\Parallels\Plesk\Additional\PleskPHP5\php-cgi.exe - The FastCGI process exited unexpectedly

    IIS will return a generic Internal Server Error 500 if it doesn't have any more specific error from the application, in this case PHP/FastCGI.

    To provide IIS with all necessary php error information and NOT display the generic Internal Server Error, do the following to your php.ini file:

    log_errors = off

    To ensure PHP error messages do make it to the web page do this:

    display_errors = on

    The level of PHP error reporting can be controlled by (more about this at http://php.net/error-reporting ):

    error_reporting = E_ALL & ~E_NOTICE

    And this should give you clues on where to go next.  I found several sites which had deprecated PHP code elements - moving up to a supported PHP version (now I had lots of versions to choose from in Plesk 12.5) helped to resolve most of those.

  3. File Manager zip extracting fails.


    Find that 7-zip directory and give psacln rights to it.

And several more knobbly problems to keep you occupied.  Web.config file reporting a duplicate MIME type? That sort of thing.

Ah, well. All sorted eventually ... until the next upgrade!
