Linux Scripting, Part IV: Scripting for Longevity
We’ve learned a lot about commands and utilities, and how to create a script, but now we need to discuss the importance of scripting for longevity.
What is Scripting for Longevity? We have a tendency to focus on automating what WE might not want to perform manually, while avoiding anything we think might void our value. We may try to ensure there is an ongoing necessity for our role or our knowledge as we create scripts, building that dependence into the execution process, scheduling, arguments, or pre-run and post-run steps. This doesn’t make us an asset, but a liability, and it goes against what I call “the Code of Conduct” of automating.
Questions
The questions you have to ask yourself as you script out solutions are:
- Am I automating what users need automated or just what I find tedious?
- Am I building out scripts that have hardcoded values in them vs. dynamically provided, ensuring ongoing support from me?
- Will I need to update this script in 5 weeks, 6 months, or 1 year?
I’m going to admit that the reason I didn’t embrace PowerShell at first was that most of the examples I found were full of hardcoded values. I found it incredibly obtuse, but I started to realize that many of those examples came from sources who might not have the scripting history of those from other shells (this is just my theory, without much evidence to back it up, so keep that in mind…). As PowerShell scripts have matured, I’ve noticed how many are being built with more dynamic values and advanced scripting options, and with this, I’ve become more comfortable with PowerShell.
I think the best way to learn is to see real examples, so let’s demonstrate.
Environment
When you set environment variables as part of the script user’s login, you can eliminate extra coding. Adding them to the .bashrc or a .profile ensures that these values are already known to the script and available to it. Instead of entering paths to directories or setting up the environment inside each script, it becomes part of the environment as soon as the user logs in. Here’s an example of a .profile; in our case we’ll call the file .profile_sql, as we can create a couple of different profiles, then maintain those few files instead of tons of variables in the individual scripts:
export SQL_HOME=/opt/app/mssql
export SQLDB=finance1
export myip=$(curl http://ifconfig.me)
export subscriptionID=$(az account show | grep id | awk '{print $2}' | tr -d \"\,)
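To make these values available at every login, the profile can be sourced from .bashrc. A quick sketch (the file name .profile_sql is the one we chose above):

# Source the profile at login by adding one line to ~/.bashrc:
echo '. ~/.profile_sql' >> ~/.bashrc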
When writing a script, you can then source this profile to set the values as part of your script, eliminating the need to set them in the script itself. It also gives you a single environment profile to maintain. In the above example, we’ve set the following:
- The installation home for SQL Server
- The name of the default database that will be used when logging in with SQLCMD
- The IP Address for the user’s workstation
- The Azure subscription ID for the user
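As a quick sketch of how a script might consume these values (assuming sqlcmd is installed; the server name localhost is just an illustration, not from the original), nothing needs to be hardcoded in the script itself:

# Log into the default database from the profile and confirm the environment.
sqlcmd -S localhost -d "$SQLDB" -Q "SELECT @@VERSION;"
echo "SQL Server home: $SQL_HOME, workstation IP: $myip"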
The Script
In our shell script, we’ll begin by declaring our shell, then sourcing the .profile_sql, setting everything we just discussed:
#!/bin/bash
. ~/.profile_sql
The profile can contain everything needed for this script and any others it supports, allowing for recycling. Even if a unique script only needs a few of the values, one profile can support everything, or you can break it up by database or application environment, creating a unique profile for each.
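As a sketch of that last idea (the .profile_ora file and the argument handling here are hypothetical, just to illustrate the pattern), a wrapper could choose the profile by environment:

#!/bin/bash
# Hypothetical wrapper: pick a profile based on the first argument.
ENV_TYPE=${1:-sql}            # default to the SQL environment
case "$ENV_TYPE" in
  sql) . ~/.profile_sql ;;
  ora) . ~/.profile_ora ;;    # a second, database-specific profile
  *)   echo "Unknown environment: $ENV_TYPE" >&2; exit 1 ;;
esac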
After this, we can then set any unique variables for the script that can’t be or shouldn’t be set at the profile level and begin scripting. Notice that we can use the variables from the profile inside other variables:
set -euo pipefail
IFS=$'\n\t'
export DTA_DIR=$SQL_HOME/data/
export startip=$myip
export endip=$myip
export servername=$holname
export vnet="${servername}_vnet"
export snet="${vnet}_snet"
The DTA_DIR value is partially hardcoded, so if we moved the data files, we’d need to update this script. Think about how we could query the database to gather these values in the profile instead (an enhancement). There are often options to pull values dynamically, which removes the need for ongoing support and allows scripts to update automatically as changes are made behind the scenes.
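One possible version of that enhancement (a sketch only; it assumes sqlcmd is available, a localhost connection, and SQL Server 2012 SP1 or later, where this server property exists) would query the instance for its data path instead of hardcoding it:

# Pull the default data path from the instance itself rather than hardcoding it.
# -h -1 suppresses column headers, -W trims trailing whitespace.
export DTA_DIR=$(sqlcmd -S localhost -d "$SQLDB" -h -1 -W \
  -Q "SET NOCOUNT ON; SELECT SERVERPROPERTY('InstanceDefaultDataPath');")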
As an example, I capture my IP address in the profile, then set that value in my script after calling .profile_sql, so the IP address is updated beforehand. You can also set values built up from other values: note the vnet is a combination of the server name with a naming extension, and the subnet (snet) is just the vnet name with a further extension on top of that. The user will need an identifying name, and if they’ve already created one, why not reuse it and append to it, rather than locking everything into a single hardcoded value?
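A minimal sketch of that reuse (the argument handling is my illustration, not from the original script): take the name the user already has, and derive everything else from it:

# Take the identifying name as the first argument and build on it.
servername=${1:?"Usage: $0 <servername>"}
vnet="${servername}_vnet"
snet="${vnet}_snet"
echo "Provisioning $vnet and $snet for $servername"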
Recycle, Generate, Reuse
This can also be done by using utilities/commands like awk/grep/sed to capture values or queries or commands to pull data that will populate values. If the data comes from a command and populates to the screen in a way that isn’t easily manipulated, you can output the data to a file that’s easier to work with or can be captured multiple ways as you work through the steps in your script.
Here are two examples:
az vm image list --offer Oracle --all --publisher Oracle --output table > db.lst
The above Azure command takes the list of images for Oracle offerings and outputs it in table format to a file named db.lst. Every time I rerun it, it writes over the file, recycling it and ensuring only the latest information is kept.
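For contrast, the difference between recycling the file and accumulating old runs is a single character (> overwrites, >> appends):

az vm image list --offer Oracle --all --publisher Oracle --output table > db.lst    # overwrite: only the latest run survives
az vm image list --offer Oracle --all --publisher Oracle --output table >> db.lst   # append: old runs pile up in the file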
I can then use this data to fulfill different scripts in different ways.
For my Oracle VM creation, I can capture the fourth column of the table output and print it as a list of options using the awk utility:
cat db.lst | awk '{print $4}'
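One small refinement, assuming the standard two-line table header that the az CLI prints: skip the header rows so only the values remain:

# NR>2 skips the column headings and the dashed separator line.
awk 'NR>2 {print $4}' db.lst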
You can also do this dynamically to capture values that you want to return to the screen for the customer to choose from, so they always have the latest available:
az account list-locations | grep name | awk '{print $2}' | tr -d \"\, | grep us | grep -v australia
The above Azure command lists all the location names (grep name), printing the second field when it contains “us”, but not when it contains “australia” (which also contains “us”). I’ve also asked it to trim the special characters from the output (tr -d).
Notice again that each command is separated by a “|” (pipe), allowing me to connect different commands and utilities.
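Here’s a sketch of how that pipeline can feed an interactive menu (the select loop and variable names are my illustration, not part of the original):

# Build the location list dynamically, then let the customer pick from it.
locations=$(az account list-locations | grep name | awk '{print $2}' | tr -d \"\, | grep us | grep -v australia)
select region in $locations; do
  echo "You chose: $region"
  break
done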
Building these types of features into scripts creates longevity and sustainability that hardcoding and manual intervention can’t. Always ask yourself:
- Is there a way to dynamically generate this value?
- Have I entered anything that will require me to revisit this script sooner rather than later?
- Are there utilities that can replace what I’m trying to script out manually?
- Should I be placing this automation in the specific script or have a wrapper script or profile handle the variable setting?
These are all questions you should be asking yourself as you write a script or enhance an existing one. Scripting is not a one-shot deal. You should be working to enhance and build out scripts to create more value for you, the user, and the business on an ongoing basis.