Check Out Linux Mint

I have to share something with everyone now. Anybody and everybody that knows me knows that I am a die-hard UNIX and Linux fan. I have spent the majority of my career managing UNIX and Linux boxes, sometimes with a server-to-admin ratio of 100 to 1. Everyone also knows that I am a die-hard Debian fan; my distro of choice for my servers and my desktops is Debian, hands down. However, that doesn’t mean that I don’t use or like other distributions. Every one has its place and purpose. I really dig Ubuntu and SUSE, and I cut my teeth on Red Hat and CentOS, just as an example.

That being said, the purpose of this post is to tell you about another distro that I just recently checked out, called Linux Mint. I know a lot of people have already found it, since it sits in the number one slot over at DistroWatch. On the recommendation of my friend Steve, I tried it out, and I have to tell you that I was absolutely blown away by it. It’s based on Ubuntu, which is itself based on Debian, so right there is a plus in my book; it has a solid core and foundation, but that’s not what blew me away.


Quickly backup files with this bash script

This is something that I use on a regular basis on all of my servers. How many times have you been ready to edit a file and either didn’t make a backup copy, or made one but by now are real tired of typing out a copy command with the file name and a date stamp and so on? It’s not hard to do, but it gets old quick typing the same thing over and over again. Plus, you might not always name the copies the same way, so now your backup files have different naming patterns and whatnot.

Don’t worry, I have an easy solution. I created a simple script that backs up the specified file and appends a time and date stamp to the end of it. I symlink it to the command ‘bu’ someplace like /usr/bin so it’s always in the path of whatever user I might be (myself, root, backup, whoever), and then POW, it’s easy to back up files, and they are always named the same way – you just type “bu filename”. Now, if you don’t like the way I name my file copies, feel free to customize this to suit your needs. Also, the script currently makes the copy right next to the original file, but it would be easy to copy the files to a backup directory somewhere instead if you wanted; the possibilities are endless!

OK, on to the script goodness:

#!/bin/bash

if [ -z "$1" ]; then
  echo "No input given, stopping"
  exit 1
fi

# Build the stamp with date format strings; parsing the default
# 'date' output with awk is locale-dependent and breaks if the
# output format ever changes.
YEAR=$(date +%Y)
MONTH=$(date +%b)
DAY=$(date +%d)
TIME=$(date +%H-%M-%S)

echo -n "Backing up the file named $1 ... "
# Quote "$1" so file names with spaces work too.
/bin/cp -p "$1" "${1}_${YEAR}.${MONTH}.${DAY}_${TIME}" > /tmp/bu_run.log 2>&1
echo "done."

There you have it, a simple file backup script in bash that can save you time and many, many keystrokes. Drop me a comment and let me know what you think, or if you have any suggestions or improvements.

Cool Tools: Hard Disk Sentinel

Recently I found a tool for Windows and Linux PCs that provides a long overdue service: hard disk monitoring. Now, I don’t mean monitoring like some tools, where it just looks at free space or temperature and that’s it; this tool looks at everything you can think of. Yes, it checks free space and temperature, but it also reads, monitors and reports on S.M.A.R.T. data, errors, log information, performance and more. Check out a full list of features here.

One thing that this tool does that I really like is hard disk surface testing and data relocation. I don’t mean the simple “can I read this” sector testing that most tools do; HDS does intensive surface testing, making sure that the entire disk can be read from and written to properly. It can detect weak sectors, that is, sectors that still work and thus would pass a simple read-only test, but are not “good as new” and could be near failure. When it finds these areas, it can relocate that data to known good areas and attempt to re-initialize the weak sectors. If that fails, they can be marked bad and not used, making sure that no data is placed anywhere that might be at risk of data loss. You can also set up rules to back up data to another place when these weak areas are found; this tool is highly customizable.

There is another tool that does this kind of testing amazingly well, and it’s called SpinRite, from Gibson Research. The major disadvantage of SpinRite, though, is that you must boot from a disk and run it from a DOS console. This allows exclusive and total access to the disks, and means that the testing is even more thorough, but your system is not usable while the tests are running. The trade-off of getting to have these tests performed while my system is up and running means a lot to me.

Unfortunately, this is not a free tool, although they do have a free trial you can use to test it out, and the pricing is very reasonable. All in all, I highly recommend this tool for any user’s PC. Aside from installing directly onto your PC, the license allows this tool to be installed onto a memory stick or thumb drive instead, allowing the tests to be run on many computers. That makes it a golden tool for a PC technician’s kit. So, check it out and see what you think. I am very happy with it, and I hope you get some benefit from it too. Don’t forget to check out the rest of the Cool Tools over in the Cool Tools section!

*Note: Please remember that this is not any kind of paid advertisement or review. I am posting this because of exactly what I said in the article, I found this tool and found it to be useful and wanted to share it with my readers. I just want to make sure that you know that I in no way am getting paid for this article, nor do I get paid if you buy the software, etc. This is a 100% honest review from a happy user!


A little history for all us starnix guys (and gals) out there

Ken Thompson (seated) and Dennis Ritchie. If you spend any amount of time working with or administering UNIX and/or Linux servers, especially UNIX, you should be familiar with the text editor ‘vi’ and some commands like ‘sed’ and ‘awk’. If you have been around a while, or had the good(?) fortune of working on some old(er) systems, you might even remember the line editor ‘ed’. I’ll show my age here and recall fond memories of using ‘ed’ to write code many years back.

OK, on to the point, I was looking through Wikipedia for something entirely un-related, but ran across a tidbit of information that I thought was really cool, and that I knew I had to share with Solarum’s readers. It gives a bit of history about some of the tools that we use and love today.

From Wikipedia:

“ed is a line editor for the Unix operating system. It was one of the first end-user programs hosted on the system and has been standard in Unix-based systems ever since. ed was originally written in PDP-11/20 assembler by Ken Thompson in 1971. Ken Thompson was very familiar with an earlier editor known as qed from University of California at Berkeley, Ken Thompson’s alma mater; he reimplemented qed on the CTSS and Multics systems, so it is natural that he carried many features of qed forward into ed. Ken Thompson’s versions of qed were the first to implement regular expressions, an idea that had previously been formalized in a mathematical paper, which Ken Thompson had read. The implementation of regular expressions in ed is considerably less general than the implementation in qed.

ed went on to influence ex, which in turn spawned vi. The non-interactive Unix command grep was inspired by a common special use of qed and later ed, where the command g/re/p means globally search for the regular expression re and print the lines containing it. The Unix stream editor, sed, implemented many of the scripting features of qed that were not supported by ed on Unix; sed, in turn, influenced the design of the programming language AWK, which in turn inspired aspects of Perl.”

It’s pretty cool how stuff flows and comes together. Who would have thought that a couple of simple commands or programs would turn into what we have today?

*Note: starnix refers to the combination of UNIX, Linux and any other ix/ux OS that we work with.

Navicat SSH Tunnel Error – 2013 Lost connection to MySQL server

This post is for anyone out there running any of the Navicat database tools.  The company that makes the Navicat line, PremiumSoft, is probably best known for their incredible database administration tool, Navicat.  That’s where I first found them.  They make a database admin tool that can connect to MySQL, MS SQL Server, Oracle, SQLite and everything in between.  Aside from being able to connect to just about anything that stores data, once connected you can do so many cool things with your databases in the name of database administration that it would take me a week to cover it all.  Besides, this post isn’t a commercial for Navicat, but I did have to share just how good this product is.  Believe me, it is amazing, and now they have a really wicked data modeling tool that works hand in hand with the database admin tool.  You need to see it to believe it.  Check out their site [link]; they have very good demos and lots of information about the products.

My apologies, I digress. The main purpose of my post was to inform anyone already using Navicat or any of the other PremiumSoft products about a problem I ran into and a way to fix it.  I am using the software with MySQL databases primarily, but I believe the principle of the fix will apply to any database and server out there, especially on Linux.

Now, one of the really cool things about the database admin and data modeling tools is that they can connect to your database via an SSH (Secure Shell, port 22) tunnel, instead of the normal default and usually plain text method.  For example, by default, when you connect to a MySQL server, the username and password you give to the server are sent in plain text, so anyone can read them.  Any command you type on that database console is also sent in plain text, so anyone can read it.  Think about the new user you just created for your new web hosting customer.  What if their database username and password fell into the wrong hands?  It might be bad, it might not; it might be localized just to that one customer/user, which would be bad enough, but suppose they found an exploit and got root on your server.  Now they have all of your data.  Even if you don’t have any data that is secret, the hassle alone, not to mention explaining all of this to your customer(s), makes this a really bad day.

This isn’t usually a big concern if you are running the database on the same server as the web server (which is common practice in many hosting scenarios), and if your database tools are on the server, like the MySQL command line tools.  But what if you want to connect to the database from, say, your PC?  Like you would do if using a database admin tool like Navicat.  You certainly don’t want all of the data that you will be sending back and forth to be in plain text, right?  Well, now you don’t have to leave it in plain text!  You can set up the connection in Navicat to connect to the Secure Shell server, which means you have an encrypted connection instead of plain text.  Then, you can use the SSH tunnel that was created to connect to the database server itself.  What this means is that the SSH server redirects your communications to the database server locally, so no one can see them.  Just like you were sitting at the server itself.

I’ll run through it again real quick, see if this makes sense.  The connection between your PC and the server running the database is now encrypted and secure from prying eyes because, instead of connecting to the database server directly, you are connecting to the Secure Shell server.  It is the Secure Shell server that takes your communication and hands it off to the database server internally, so it’s safe from anyone watching outside.  It’s really cool, and just another reason I love the Navicat product so much.  Not to mention Linux as well!
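As a sketch of what Navicat is doing under the hood, you can build the same kind of tunnel by hand with the stock OpenSSH client; the host names, users and local port here are just examples:

```shell
# Forward local port 3307 through an encrypted SSH session to the
# MySQL port (3306) on the database server itself. -N means "no
# remote command, just forward ports".
ssh -N -L 3307:127.0.0.1:3306 admin@dbhost.example.com

# In another terminal, connect "locally" -- the traffic actually
# rides the tunnel, encrypted end to end:
mysql -h 127.0.0.1 -P 3307 -u dbuser -p
```

From the MySQL server’s point of view, that second connection arrives from 127.0.0.1, which is exactly why the TCP_WRAPPERS fix below the error message matters.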

The problem that I found was this: when I created the link to the SSH server in order to talk to the MySQL server, it wouldn’t connect.  I would get the connection to the SSH server, but when it then tried to talk to the database server, the database server kicked it out like no connection could be made.  I tried connecting locally from the Linux console, thinking that maybe I had killed some MySQL process that listens for connections, but it was working fine.  I tried it again and again, but it just didn’t work.  The error I was getting from Navicat was this:

2013 – Lost connection to MySQL server at ‘reading initial communication packet’, system error: 0

I did some digging and found a basic setting to check.  This didn’t fix the problem, but I thought I would share it here since it has to be set in order for the tunnel to work:

  1. In the sshd config file (/etc/ssh/sshd_config), make sure that AllowTcpForwarding is enabled; it is often disabled by default or by hardening policies.
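For reference, the relevant line in the sshd config looks like this (remember to reload sshd after changing it):

```
# /etc/ssh/sshd_config
AllowTcpForwarding yes
```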

What I finally found to be causing the problem was TCP_WRAPPERS.  Naturally, in my hosts.allow file I had the IP address of my PC, so that I could connect to the server.  So at first it seemed odd that this was my problem.  However, when you think about it, it makes sense.  The connection that is coming to the MySQL server originates not from my PC, but from the SSH server itself.  That’s right: my connection stops at the SSH server, and then the SSH server sends the data to the database server.  This is a simplified view of things, but it should illustrate what’s going on.  Therefore, the simple fix was to add mysqld: localhost or 127.0.0.1 to the hosts.allow file in order to allow the traffic to go through TCP_WRAPPERS and on to the MySQL server.  I read more about this once I worked it out, and I saw some “technicians” offering the solution of adding mysqld: ALL to the hosts.allow file.  Egads! I said!  Technically that would work, but damn, don’t open it up to allow everyone into your databases!  Just add localhost or 127.0.0.1 and you will be fine, and you will keep out the other riff raff.  I hope this helps some of you out there, enjoy!
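The resulting entry in hosts.allow is a single line; here is a sketch of what mine looks like, with the entry for my PC left out for brevity:

```
# /etc/hosts.allow -- allow only the locally forwarded connection
# to reach mysqld, not the whole world:
mysqld : 127.0.0.1
```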

Learn Solaris UNIX or Linux Today, The Real Way

OK folks, I added this post so I could tell you about a new article that I just posted.  Just like the title here suggests, I talk about learning Solaris UNIX and/or Linux the real way, or maybe it would be better said as the right way.  I don’t suppose there is a wrong or right way, but still.  In this article I reach down into the depths of not only my own knowledge but my experience as well, and use that to share all I know about building solid UNIX and Linux skills for the up and coming nix jockeys out there.  I really hope that this article can help someone, and maybe more than just one, naturally.  It’s some (I think) good advice on how to get started and some of the best ways to dig in there and learn some good stuff.  Maybe in the future I’ll post more low level, hard core how-to stuff and see how that goes over.  Some of what I have posted already (like SVM disk info and Symantec Storage Foundation (formerly Veritas Volume Manager)) has been real popular.  Anyway, for anyone interested, head on over to the Library and check out the new article on learning Solaris UNIX and Linux today; you might find something helpful in there!  If you have any thoughts about it or something you think I should add, please drop me a line and let me know.  Thanks!