Where Is Microsoft Excel Used?


Whether you work at an accounting firm, a marketing company, an auto dealership, a school attendance office, a manufacturing plant’s human resources department, or an office associated with city, county, state or federal government, chances are, you’ll be called upon to use and learn Excel.

Just about every workplace has a demand for Excel, the computing world’s most commonly used software program for comparative data analysis. Excel has been available in various incarnations for more than a decade. Each subsequent release takes the program to new territory.

Popularly known as the best spreadsheet program on the market, Excel is powerful, easy to use, and remarkably efficient. Excel is highly interactive. Its spreadsheet cells are arranged in a collection of rows and columns, each of which can hold a number, a text string, or a formula that performs a function, such as calculation. It’s easy to copy and move cells as well as modify formulas. The spreadsheet is displayed on the computer screen in a scrollable window that allows the document to be as deep or as wide as required.

Working for a major newspaper in Northern California, I was one of several reporters involved in the annual evaluation of our county’s economy. The job involved collecting data that would be punched into Excel spreadsheets that ultimately ranked information according to the category of statistics being reviewed.

The beauty of Excel, from the standpoint of newspaper research projects, is that you can use formulas to recalculate results by changing any of the cells they use. With this model, you can use the same spreadsheet data to achieve various results by simply defining and changing formulas as desired. It is this feature that makes Excel so useful in so many different arenas.

With a click of the mouse, we reporters were able to get answers to a wide variety of questions. Which employers had the greatest number of workers? Which ones had the highest amount of gross annual receipts? Which ones appeared to be growing and which ones had declining sales? What was the volume of real estate loans and had there been a decline or increase from the previous year?

We looked at local and national retail, services, financial institutions, government entities, agriculture, the wine industry, tourism and hospitality, manufacturing, residential and commercial real estate, everything imaginable.

Excel allowed us to examine ratios, percentages, and anything else we wanted to scrutinize. Finally, we were able to use Excel to compare the results to data from previous years.

Since reporters tend to be former English majors, most of those who worked on this annual project were more familiar with Microsoft Word than any other software program. Therefore, most were required to undergo Excel training. For some, learning Excel was easier than for others. A few relied on guides such as Microsoft Excel Bible. Some reporters underwent an Excel tutorial while others learned by doing.

Not only were the Excel spreadsheets crucial to the research, the format of each was published in the newspaper. Here is where some additional Excel functions came into play. Editors were able to make the spreadsheets more visually appealing by using colors and shading, borders and lines, and other features that made the spreadsheets easy for readers to decipher.

Wearing another of my several hats in the newsroom, I often wrote articles concerning the local job market. I found proficiency in Excel was a requirement for a wide variety of employment positions and that area recruiting firms offered their clients opportunities to take free or low-cost Excel tutorials in preparation for the workplace. Most employers expect job candidates to already know the software that the work will require and don’t want to have to train new hires.

Don’t kid yourself. If you’re seeking any kind of office work, you’ll need to know not only Microsoft Word but also Excel.

Excel and Microsoft are trademarks of Microsoft Corporation, registered in the U.S. and other countries.


Source by Sheri Graves

Logging for the PCI DSS – How to Gather Server and Firewall Audit Trails for PCI DSS Requirement 10


PCI DSS Requirement 10 calls for a full audit trail of all activity for all devices and users, and specifically requires all event and audit logs to be gathered centrally and securely backed up. The thinking here is twofold.

Firstly, as a pro-active security measure, the PCI DSS requires all logs to be reviewed on a daily basis (yes, you did read that correctly: review ALL logs DAILY – we shall return to this potentially overwhelming burden later). This forces the Security Team to become more intimate with the daily ‘business as usual’ workings of the network. That way, when a genuine security threat arises, it will be more easily detected through unusual events and activity patterns.

The second driver for logging all activity is to give a ‘black box’ recorded audit trail so that if a cyber crime is committed, a forensic analysis of the activity surrounding the security incident can be conducted. At best, the perpetrator and the extent of their wrongdoing can be identified and remediated. At worst – lessons can be learned from the attack so that processes and/or technological security defenses can be improved. Of course, if you are a PCI Merchant reading this, then your main driver is that this is a mandatory PCI DSS requirement – so we should get moving!

Which devices are within scope of PCI Requirement 10? The same answer as for the PCI DSS as a whole: anything involved with handling, or with access to, card data is within scope, and we therefore need to capture an audit trail from each of them. The most critical devices are the firewall, servers holding settlement or transaction files, and any Domain Controller for the PCI estate, although all ‘in scope’ devices must be covered without exception.

How do we get Event Logs from ‘in scope’ PCI devices?

We’ll take them in turn –

How do I get PCI Event Logs from Firewalls? The exact command set varies between manufacturers and firewall versions, but you will need to enable ‘logging’ via either the firewall Web interface or the Command Line. Taking a typical example – a Cisco ASA – the CLI command sequence is as follows:

logging on
no logging console
no logging monitor
logging a.b.c.d (where a.b.c.d is the address of your syslog server)
logging trap informational

This will make sure all ‘Informational’ level and above messages are forwarded to the syslog server, which guarantees all logon and logoff events are captured.

How do I get PCI Audit Trails from Windows Servers and EPoS/Tills? There are a few more steps required for Windows Servers and PCs/EPoS devices. First of all, it is necessary to make sure auditing is enabled for logon and logoff events, privilege use, policy change and, depending on your application and how card data is handled, object access; use the Local Security Policy editor to configure this. You may also wish to enable System Event logging if you want to use your SIEM system to help troubleshoot and pre-empt system problems, e.g. a failing disk can be caught before complete failure by spotting disk errors. Typically we will need Success and Failure to be logged for each event:

  • Account Logon Events – Success and Failure
  • Account Management Events – Success and Failure
  • Directory Service Access Events* – Failure
  • Logon Events – Success and Failure
  • Object Access Events** – Success and Failure
  • Policy Change Events – Success and Failure
  • Privilege Use Events – Failure
  • Process Tracking*** – No Auditing
  • System Events**** – Success and Failure

* Directory Service Access Events available on a Domain Controller only

 

** Object Access – used in conjunction with Folder and File Auditing. Auditing Failures reveals attempted access to forbidden secure objects, which may be an attempted security breach. Auditing Success is used to give an audit trail of all access to secured data, such as card data in a settlement/transaction file or folder.

*** Process Tracking – not recommended, as this will generate a large number of events. It is better to use a specialized whitelisting/blacklisting technology.

**** System Events – not required for PCI DSS compliance, but often used to provide extra ‘added value’ from a PCI DSS initiative, giving early warning signs of problems with hardware and so pre-empting system failures.

Once events are being audited, they then need to be relayed back to your central syslog server. A Windows syslog agent program will automatically bind into the Windows Event Logs and send all events via syslog. The added benefit of an agent like this is that events can be formatted into standard syslog severity and facility codes and also pre-filtered. It is vital that events are forwarded to the secure syslog server in real time, to ensure they are backed up before there is any opportunity to clear the local server event log.

Unix/Linux Servers – Enable logging using the syslogd daemon, which is a standard part of all UNIX and Linux operating systems such as Red Hat Enterprise Linux, CentOS and Ubuntu. Edit the /etc/syslog.conf file and enter the details of your syslog server.

For example, append the following line to the /etc/syslog.conf file

*.* @a.b.c.d

Or, if using Solaris or another System V-type UNIX:

*.debug @a.b.c.d

*.info @a.b.c.d

*.notice @a.b.c.d

*.warning @a.b.c.d

*.err @a.b.c.d

*.crit @a.b.c.d

*.alert @a.b.c.d

*.emerg @a.b.c.d

Where a.b.c.d is the IP address of the targeted syslog server.
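As a concrete sketch of the edit described above, the following appends a catch-all forwarding rule to a scratch copy of syslog.conf. The server address 192.0.2.10 and the /tmp working copy are illustrative placeholders; on a real system you would edit /etc/syslog.conf itself.

```shell
# work on a scratch copy so the sketch is safe to run anywhere
cp /etc/syslog.conf /tmp/syslog.conf.new 2>/dev/null || touch /tmp/syslog.conf.new
# forward everything to the central syslog server (placeholder address)
printf '*.*\t@192.0.2.10\n' >> /tmp/syslog.conf.new
# confirm the rule is present
grep '@192.0.2.10' /tmp/syslog.conf.new
```

After updating the real file, restart (or HUP) the syslog daemon so the new rule takes effect.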

If you need to collect logs from a third-party application, e.g. Oracle, then you may need to use a specialized Unix syslog agent which allows third-party log files to be relayed via syslog.

Other Network Devices – Routers and switches within the scope of the PCI DSS will also need to be configured to send events via syslog. As was detailed for firewalls earlier, syslog is an almost universally supported function on network devices and appliances. However, in the rare case that syslog is not supported, SNMP traps can be used, provided the syslog server being used can receive and interpret SNMP traps.

PCI DSS Requirement 10.6 – “Review logs for all system components at least daily.” We have now covered how to get the right logs from all devices within scope of the PCI DSS, but this is often the simpler part of handling Requirement 10. The aspect of Requirement 10 which often concerns PCI Merchants the most is the extra workload they expect by now being responsible for analyzing and understanding a potentially huge volume of logs. There is often an ‘out of sight, out of mind’ philosophy, or an ‘if we can’t see the logs, then we can’t be responsible for reviewing them’ mindset; once logs are made visible and placed on the screen in front of the Merchant, there is no longer any excuse for ignoring them.

Tellingly, although the PCI DSS avoids being prescriptive about how to deliver against the 12 requirements, Requirement 10 specifically notes that “Log harvesting, parsing, and alerting tools may be used to meet compliance with Requirement 10.6”. In practice it would be an extremely manpower-intensive task to review all event logs in even a small-scale environment, so an automated means of analyzing logs is essential.
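As a trivial illustration of what such automated parsing does at its core, the sketch below scans a canned, hypothetical log for alert-worthy patterns; real log-analysis tooling layers scheduling, correlation and alerting on top of exactly this kind of filter.

```shell
# build a tiny sample log so the filter can be demonstrated anywhere
cat > /tmp/review.sample <<'EOF'
May  1 09:00:01 host sshd[101]: session opened for user alice
May  1 09:05:42 host sshd[102]: authentication failure for user root
May  1 09:06:03 host app[200]: nightly job completed
EOF
# surface only the lines a daily reviewer needs to see
grep -iE 'fail|error|denied' /tmp/review.sample
```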

However, when implemented correctly, this will become much more than a tool to help you cope with the inconvenient burden of the PCI DSS. An intelligent Security Information and Event Management system will be hugely beneficial to all troubleshooting and problem-investigation tasks. Such a system will allow potential problems to be identified and fixed before they affect business operations. From a security standpoint, by enabling you to become ‘intimate’ with the normal workings of your systems, you are well placed to spot truly unusual and potentially significant security incidents.

For more information go to http://www.newnettechnologies.com

All material is copyright New Net Technologies Ltd.


Source by Mark Kedgley

Automatically Capturing, Saving and Publishing Serial RS232 Data to the Web


This article describes setting up a system that uses a lightweight command-line (CLI) install of Linux (Ubuntu in this example) to capture RS232 (serial) data and upload it to an FTP server.

This is useful for applications involving data collection from sensors or other serial devices, such as amateur packet radio, where you want to use the data on a web site or another remote machine.

The process involves starting the programs ‘screen’ and ‘minicom’ when the machine starts up. The ‘screen’ output will be captured to a text file. This text file will then be sent up to an FTP server using ‘lftp’, scheduled via a ‘cron’ job.

This article will not cover installing a command-line Linux such as Ubuntu minimal – it is assumed you have a functional system. When the machine starts up, it should start a detached ‘screen’ session, starting minicom (a terminal program) within it and logging the screen output to a file.

To install screen and minicom, use: sudo apt-get install screen minicom

Verify your minicom settings

Run sudo minicom -s, set the default port, and clear out the init strings, since the startup script runs as root. Press Ctrl-A O, then set the port to ttyS0 and the speed to 1200 (or the appropriate other baud rate). Note that I also had to edit the file with sudo nano /etc/minicom/minirc.dfl to get it working properly – even though I had run sudo minicom -s. The contents of my file:

# Machine-generated file - use "minicom -s" to change parameters.

pu baudrate 1200

pu bits 8

pu parity N

pu stopbits 1

pu mhangup

pr port /dev/ttyS0

Get screen to auto-start at boot and run minicom:

In /etc/init.d, create a startup script: sudo nano /etc/init.d/startHAM

contents:

#!/bin/sh

/usr/bin/screen -d -m -L /usr/bin/minicom

modify the file so it will run at startup:

sudo chmod +x /etc/init.d/startHAM

sudo update-rc.d startHAM defaults 80

I could not get logging to work by passing "-L", so edit the file: sudo nano /etc/screenrc

add the line:

log on

It is recommended you reboot after editing these files. After you reboot, you can see if it is working by typing sudo screen -r to resume the screen session – you should be in minicom and should be able to see/type to your TNC (or device).

If that works, you can check the log file – mine is saved to the root. nano /screenlog.0

You can tail it to watch in real time anything written to the log: tail -f /screenlog.0

You may want to sudo screen -r, press return, type mheard, etc. (for a packet modem) to generate some text, then press Ctrl-A d to detach, and then open nano /screenlog.0 and go to the end – sometimes control characters may hide output, but it is really there.
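Since control characters can hide output in the log, a small filter makes it readable. This sketch demonstrates the idea on a sample line; the escape-stripping pattern covers common color/attribute codes only, and the sample file path is a placeholder.

```shell
# sample line containing a carriage return and ANSI attribute codes
printf 'hello\r\033[1mworld\033[0m\n' > /tmp/screenlog.sample
# strip carriage returns and ANSI escape sequences to reveal the text
esc=$(printf '\033')
tr -d '\r' < /tmp/screenlog.sample | sed "s/${esc}\[[0-9;]*m//g"
```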

Now you have screen and minicom starting at boot, and all the screen output is being saved to the screenlog.0 file.

Uploading the screenlog.0 file to your web site ftp server:

The program lftp can be used to synchronize files to an ftp server (your web host). To install lftp, type: sudo apt-get install lftp

Create an lftp script file "SyncHAM.txt" in your home directory: sudo nano SyncHAM.txt

Contents:

open ftp://username:password@mywebhostaddress.com

set ftp:ssl-allow no

mput -O /public_html/myhostbasepath /screenlog.0

-O is the base path on the web host, after that is the path to the file to be sent (screenlog).

Note that the ssl-allow line is only needed if your host does not support SSL. mput sends the file UP to the FTP server; -O sets the destination directory on the host.

To test it, type: sudo lftp -f SyncHAM.txt

Setup Cron Job

edit the file: sudo nano /etc/cron.daily/hamsync

contents:

#!/bin/sh

lftp -f /home/cmate/SyncHAM.txt

echo "done with SyncHAM.txt"

make it executable:

sudo chmod +x /etc/cron.daily/hamsync

Be sure to reboot the machine. You should now have a functional system to capture the serial data and upload it daily to the ftp server.


Source by Scott Szretter

How To Repair The Acer D2D Recovery


This tutorial can also help you do this on other computer brands.

Disclaimer: First of all, you must be aware that some of the operations to come can cause irreversible changes to your hard disk. I recommend, and cannot stress this enough, that you make a backup of your system before launching into any hazardous operation. Any damage and/or modification done to your system will be entirely your responsibility. The following procedures were done on an Acer Aspire 5102wlmi, and some also worked on a Dell Inspiron 9400/1705.

As you know, Acer computers and those of other manufacturers are now delivered with a restoration system installed in a hidden partition of the hard disk. This system launches when you press ALT+F10 during startup. Sometimes, for various reasons, this system stops functioning.

The first cause is often that the D2D Recovery function is disabled in the BIOS (main menu).

The solution: enable the function and press ALT+F10 while the computer starts.

The second cause: the hidden PQSERVICE partition was erased or damaged, or you replaced the disk, in which case the partition is simply not present.

The solution: if you did not previously back up your system by making a disk image, it will not be possible to use D2D Recovery. Your only hope will be to have an Acer Recovery CD/DVD in your possession.

The third cause: the Acer Master Boot Record (MBR) was damaged or replaced by a non-Acer MBR. As long as the PQSERVICE partition is present, or you can get hold of the necessary Acer files, you can reinstall the Acer MBR.

The solution:

First method: on a functional Windows system.

1. Disable the D2D Recovery option in the BIOS.

2. Open a Windows session with an administrator account.

3. Download, unzip and launch partedit32 (registration required for download).

4. Identify the PQSERVICE partition by its size (at the bottom of the partedit window there is a partition information box); it is a small partition of approximately 2 to 6 GB. Once found, change the type of the partition to 0C and save.

5. Restart and open a session with an administrator account; you should now be able to navigate to the PQSERVICE partition. Look for the two files mbrwrdos.exe and rtmbr.bin. Once located, open a command prompt and run mbrwrdos.exe install rtmbr.bin; this will install the Acer MBR.

6. Close the command prompt window, restart Windows again, go into the BIOS and re-enable D2D Recovery. Now ALT+F10 should launch Acer recovery when the computer starts.

Second method: on a non-functional Windows system.

For that you must use a Linux distribution (for me Mandriva provided all the tools necessary).

1. Boot from the Mandriva install CD/DVD; the boot menu will give you the option to repair or restore the Windows boot loader.

2. If that is not enough, launch a Linux installation (this will be an occasion to test this terrible OS) and choose LILO as the boot loader (a boot menu that lets you choose between several operating systems). Once the installation is finished, restart your computer. In the boot loader menu you will have at least two Windows options; the first generally points to PQSERVICE. Choose it and you will boot directly into Acer D2D Recovery.

The last solution is the simplest. Just note that during the Linux installation you will have to resize your Windows partition to make room for a new Linux partition; this is the most perilous part because it is irreversible, so take precautions at this point.


Source by Alan Bradock

Easy Steps to Stop SMTP AUTH Relay Attack and Identify Compromised Email Account for Postfix


Today, most email applications such as Sendmail, Postfix, and even MS Exchange have been redesigned to reduce the possibility of becoming a ‘spam relay’. In our experience, most SMTP AUTH relay attacks are caused by the compromise of weakly password-protected user accounts. Once such accounts are discovered and compromised, spammers authenticate with the user credentials, are granted relay access through the server, and use it to send spam.

Below are the easy steps to stop these spam emails quickly and identify which account(s) have been compromised.

Step 1: Put the mail queue on hold.

Large amounts of spam keep filling your mail spool; even worse, the spam can fill up your /var partition. It is therefore best to temporarily hold the mail queue (with Postfix, postsuper -h ALL places all queued mail on hold) until you find out which account has been exploited by the spammer to send a large amount of email.

Step 2: Check your mail log.

Look at /var/log/maillog for the lines containing from=<>. You might see lots of email domain names that do not belong to your organization; this is because the spammer is faking the mail from address.

Step 3: Identify the compromised account(s) authenticating SMTP AUTH connections.

Next, let us check which email accounts have been exploited. Cat the mail log, grep for sasl_username, and sort the output. You should see a long list of login attempts and sessions for the exploited account(s). You can also do a quick count by piping to the wc -l command to see the total sessions for a particular user.
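This grep-and-count step can be sketched as follows. The log lines here are a canned approximation of Postfix smtpd SASL logging (the exact field layout may differ slightly between versions), so the pipeline can be demonstrated anywhere; on a live system you would point it at /var/log/maillog.

```shell
# canned sample in the style of Postfix smtpd SASL log lines
cat > /tmp/maillog.sample <<'EOF'
May  1 10:00:01 mail postfix/smtpd[123]: client=unknown[203.0.113.5], sasl_method=LOGIN, sasl_username=alice
May  1 10:00:02 mail postfix/smtpd[124]: client=unknown[203.0.113.6], sasl_method=LOGIN, sasl_username=bob
May  1 10:00:03 mail postfix/smtpd[125]: client=unknown[203.0.113.5], sasl_method=LOGIN, sasl_username=alice
EOF
# sessions per authenticated user, busiest first; a compromised account
# typically shows an abnormally high count
grep -o 'sasl_username=[^ ,]*' /tmp/maillog.sample | sort | uniq -c | sort -rn
```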

Step 4: Disable the exploited email account.

Once we have the sasl_username string, we know the compromised user account. You are advised to disable it, or change its password to a complex one.

Step 5: Move the mail queue or delete the spam email

Now we have to deal with the mail queue. The easiest and fastest way is to move the mail queue aside and do the housekeeping later. Alternatively, you can delete the spam emails using a Bash script.
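Deleting the queued spam can be scripted from mailq output. The sketch below runs against canned mailq-style lines (the queue IDs and addresses are made up); on a live Postfix system the extracted IDs would be fed to postsuper -d - instead of printed.

```shell
# canned mailq-style output (queue ID, size, date, sender)
cat > /tmp/mailq.sample <<'EOF'
A1B2C3D4E5*     4321 Mon May  1 10:00:00  spammer@example.com
F6G7H8I9J0      1234 Mon May  1 10:01:00  alice@yourdomain.com
EOF
# pick out queue IDs for the offending sender; '*' marks active messages
awk '$7 == "spammer@example.com" {print $1}' /tmp/mailq.sample | tr -d '*'
# live-system equivalent (do not run blindly):
#   mailq | awk '$7 == "spammer@example.com" {print $1}' | tr -d '*' | postsuper -d -
```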

Step 6: Release Mail queue

Remember to release the mail queue after the housekeeping process, and keep monitoring the mail traffic.


Source by James Edward Lee

How To Quickly Make A Bootable USB Stick With FreeBSD


Install FreeBSD, or use an existing FreeBSD installation, and follow these steps:

1) First, you need to prepare and format your USB stick:

fdisk -BI /dev/da0
bsdlabel -B -w da0s1
newfs -U -O1 /dev/da0s1a
boot0cfg -v -B da0
("-U -O1" ["O" as in Olympus, not zero] is for UFS1, which provides much faster copying than UFS2; if you decide on UFS2, type "-U -O2" – but expect the copying to be slower)

2) Then mount it: mount /dev/da0s1a /usb
3) Copy all directories (FreeBSD) to the stick
4) After copying, modify the /usb/boot/loader.conf (explained below)
5) In the /boot directory on your USB stick you must have MFS (Memory File System – mfsroot.gz), which you will make (instructions are below)
6) Modify your /etc/fstab in MFS and put the following line (only) there:
/dev/md0 / ufs rw 0 0
7) After you boot your computer with the stick, you will be in the MFS environment from which you will mount your USB stick with mount_nullfs (described below)

Modification of /boot/loader.conf on your USB stick

You must have the following lines in your /boot/loader.conf (some lines are optional):

mfsroot_load="YES"
mfsroot_type="mfs_root"
mfsroot_name="/boot/mfsroot"
nullfs_load="YES"
splash_bmp_load="YES"
vesa_load="YES"
geom_uzip_load="YES"
geom_label_load="YES"
bitmap_load="YES"
bitmap_name="/boot/splash.bmp"
snd_driver_load="YES"
kern.maxfiles="25000"
kern.maxusers="64"
vfs.root.mountfrom="/dev/md0"

# Additional filesystem drivers

udf_load="YES"
linux_load="YES"
fuse_load="YES"
ntfs_load="YES"
ext2fs_load="YES"
reiserfs_load="YES"

Making your own MFS

FreeBSD, after the kernel boots, can use a root file system in memory (the mfsroot_load="YES" line in /boot/loader.conf does the trick). To build such a memory file system, type the command:
dd if=/dev/zero of=mfsroot bs=1024k count=42

The mfsroot file of about 40 MB in size will be created. You need to format it, mount it and copy the most important files into it from your FreeBSD system (/bin, /sbin, /etc, /root….):

mdconfig -a -t vnode -f mfsroot -u 0
newfs /dev/md0
mount /dev/md0 /mnt
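The copy step just described can be scripted as a simple loop. This sketch demonstrates the pattern with a throwaway source tree and /tmp/fakemnt standing in for the mounted /mnt, so it is safe to run anywhere; on the real system you would copy /bin, /sbin, /etc, /root and so on into /mnt.

```shell
# stand-ins so the pattern can be shown without touching a real md device
mkdir -p /tmp/fakeroot/etc /tmp/fakeroot/root
echo '/dev/md0 / ufs rw 0 0' > /tmp/fakeroot/etc/fstab
mnt=/tmp/fakemnt
mkdir -p "$mnt"
# the actual copy pattern: preserve permissions, recurse per directory
for d in etc root; do
    cp -Rp "/tmp/fakeroot/$d" "$mnt/"
done
cat "$mnt/etc/fstab"
```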

Once copied, you must unmount it (umount /mnt) and gzip it: gzip mfsroot

Optionally, you can chroot it to see if everything works, then copy the mfsroot.gz to /usb/boot onto your USB flash drive (or disk). If you think it may be a problem to pick the most important files for your MFS (from your FreeBSD installation), search for mfsbsd in Google and either use its toolset or the MFS image alone (contained in the downloadable ISO of mfsbsd).

After booting from the USB stick (you will jump into MFS), you must mount the physical USB stick:

/sbin/mount -o ro /dev/da0s1a /usb
/sbin/mount_nullfs /usb/boot /boot
/sbin/mount_nullfs /usb/usr /usr

The above commands will help you use the big /usr directory on your USB stick instead of the /usr dir in MFS. The mount_nullfs /usb/boot /boot command is optional, as in your MFS /boot directory only the following files are needed for the little MFS to boot (in the /boot/kernel directory in MFS): geom_label.ko, geom_uzip.ko, zlib.ko and their debug symbols (zlib.ko.symbols, etc.). By mounting the /usb/boot dir via mount_nullfs onto the /boot directory in your MFS you will be able to load kernel modules.


Source by Juraj Sipos

Technical Writing – Components Of Windows User Interface In Software Documentation (2) – Window


User interface documentation, one of the important tasks in software documentation, requires clear and consistent definition of all interface components. In this second part of the series, we continue with our survey of the most important interface components that a technical writer should be familiar with.

NOTE: Windows, Mac and Linux machines all have different user interfaces, depending on the particular Operating System (OS) (or “distribution” in the case of Linux) installed on your machine. This series is limited to the Windows interface only.

First of all, let’s clarify the conceptual difference between a SCREEN and a WINDOW.

A SCREEN, as defined by Microsoft, is the “graphic portion of a visual output device.” It is sometimes used interchangeably with a “MONITOR” or a “DISPLAY.” Sometimes both are used together as in the retronym “monitor screen.”

A WINDOW, on the other hand, refers to an individual display area, surrounded by a FRAME, that is displayed when the user clicks a button or selects certain menu options.

A “screen” displays one or more “windows,” but not the other way around.

A “screen” has one size which is the size of the monitor. Every “window,” on the other hand, might have a different size depending on the user preference.

A “window” is a more abstract term than a “screen,” which is why, although there is a “screen RESOLUTION” (the number of pixels per unit length of screen), there is no corresponding “window resolution.” There are, for example, “screen SAVER” programs, but no similar “window savers.” You can save and close a window, but the screen, as the physical medium of the interface, is always there, no matter which window(s) it is displaying.

When there are multiple windows open in a screen, the window that is selected and responds to user commands is referred to as the window in FOCUS. By “focusing” on a window you select it and make it respond to your interaction.


Source by Ugur Akinci

How to Fix Pci Sys Missing or Corrupt Error on Windows 7 or Vista


The pci.sys missing or corrupt error occurs when the file is corrupt or missing from the computer. How do you fix it? There are two frequently used methods to resolve this issue: one is using the Recovery Console option; the other is performing an in-place upgrade to restore the missing or corrupt file.

The Recovery Console is the method performed by many. Usually, a simple reboot of the system will do the trick, but this may not solve the issue permanently. Boot from the Windows installation media and, when you reach the setup page, press R to repair the installation (R is the shortcut key for repairing the installation process). After this step, open the Recovery Console screen by pressing C.

After typing the number of the Windows installation you want to repair, type the local administrator password if it is asked for. Then enter the appropriate commands at the console prompt, such as map, so that you can obtain the drive letter. After the commands are entered, you can exit the screen and restart Windows normally. This time, you should not get the screen that says the pci.sys file is corrupt or missing.

Another simple way to fix this error is to use the command prompt. Make sure that you open it as the system administrator, as only the admin can make major changes; limited users cannot. At the prompt, type "sfc /scannow" and hit the Enter key. A scan runs, and at the end of the scan the error will usually be fixed. Restart the computer to make sure you do not get the error message again. These are some of the most commonly used methods to fix this error.


Source by Celly Kayser

Nagios Log Monitoring – Monitor Log Files in Unix Effectively


Nagios Log File Monitoring: Monitoring log files using Nagios can be just as difficult as it is with any other monitoring application. However, with Nagios, once you have a log monitoring script or tool that can monitor a specific log file the way you want it monitored, Nagios can be relied upon to handle the rest. This type of versatility is what makes Nagios one of the most popular and user-friendly monitoring applications out there. It can be used to effectively monitor anything. Personally, I love it. It has no equal!

My name is Jacob Bowman and I work as a Nagios Monitoring specialist. I’ve come to realize, given the number of requests I receive at my job to monitor log files, that log file monitoring is a big deal. IT departments have the ongoing need to monitor their UNIX log files in order to ensure that application or system issues can be caught in time. When issues are known about, unplanned outages can be avoided altogether.

But the common question often asked by many is: what monitoring application is available that can effectively monitor a log file? The plain answer to this question is NONE! The log monitoring applications that do exist require far too much configuration, which in effect renders them not worth considering.

Log monitoring should allow for pluggable arguments on the command line (instead of in separate config files) and should be very easy for the average UNIX user to understand and use. Most log monitoring tools are not like this. They are often complex and require time to get familiar with (through reading endless pages of installation setups). In my opinion, this is unnecessary trouble that can and should be avoided.

Again, I strongly believe, in order to be efficient, one must be able to run a program directly from the command line without needing to go elsewhere to edit config files.

So the best solution, in most cases, is to either write a log monitoring tool for your particular needs or download a log monitoring program that has already been written for your type of UNIX environment.

Once you have that log monitoring tool, you can give it to Nagios to run at any time, and Nagios will schedule it to be kicked off at regular intervals. If after running it at the set intervals, Nagios finds the issues/patterns/strings that you tell it to watch for, it will alert and send out notifications to whoever you want them sent to.

But then you wonder, what type of log monitoring tool should you write or download for your environment?

The log monitoring program that you should obtain to monitor your production log files must be as simple as the below but must still remain powerfully versatile:

Example: logrobot /var/log/messages 60 'error' 'panic' 5 10 -foundn

Output: 2---1380---352---ATWF---(Mar/1)-(16:15)---(Mar/1)-(17:15:00)

Explanation:

The "-foundn" option searches /var/log/messages for the strings "error" and "panic". Once it finds them, it will exit with 0 (for OK), 1 (for WARNING) or 2 (for CRITICAL). Each time you run that command, it will provide a one-line statistics report similar to the Output above. The fields are delimited by "---".

1st field is 2 = which means, this is critical.

2nd field is 1380 = number of seconds since the strings you specified last occurred in the log.

3rd field is 352 = there were 352 occurrences of the strings “error” and “panic” found in the log within the last 60 minutes.

4th field is ATWF = Don’t worry about this for now. Irrelevant.

5th and 6th field means = The log file was searched from (Mar/1)-(16:15) to (Mar/1)-(17:15:00). And from the data gathered from that timeframe, 352 occurrences of “error” and “panic” were found.
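A tool in this spirit can be sketched in a few lines of shell. This is a hypothetical stand-in, not the real logrobot: it counts matches over the whole file rather than a time window, but it returns the same Nagios-style exit codes.

```shell
# Hypothetical minimal Nagios-style log check: count lines matching a
# pattern and exit 0 (OK), 1 (WARNING) or 2 (CRITICAL) against thresholds.
check_log() {
    log=$1 pattern=$2 warn=$3 crit=$4
    # grep -c prints the match count; "|| true" keeps a zero count from
    # aborting the function when the shell runs with "set -e"
    count=$(grep -c -- "$pattern" "$log" || true)
    if [ "$count" -ge "$crit" ]; then
        echo "CRITICAL - $count matches of '$pattern' in $log"
        return 2
    elif [ "$count" -ge "$warn" ]; then
        echo "WARNING - $count matches of '$pattern' in $log"
        return 1
    fi
    echo "OK - $count matches of '$pattern' in $log"
    return 0
}

# Example: check_log /var/log/messages 'error' 5 10
```

Nagios (or cron) only needs the exit code and the one-line report, which is why a script this small is enough to plug in.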

If you would actually like to see all 352 occurrences, you can run the below command and pass the “-show” option to the logrobot tool. This will output to the screen all matching lines in the log that contain the strings you specified and that were written to the log within the last 60 minutes.

Example: logrobot /var/log/messages 60 'error' 'panic' 5 10 -show

The “-show” option outputs to the screen all the lines it finds in the log file that contain the “error” and “panic” strings within the past 60-minute time frame you specified. Of course, you can always change the parameters to fit your particular needs.

With this Nagios log monitoring tool (logrobot), you can do things that the big-name monitoring applications cannot come close to doing.

Once you write or download a log monitoring script or tool like the one above, you can have Nagios or cron run it on a regular basis, which will in turn enable you to keep a bird's-eye view of all the logged activities of your important servers.

Do you have to use Nagios to run it on a regular basis? Absolutely not. You can use whatever you want.
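For instance, a crontab entry could run the check and mail the report on a non-zero exit code. The install path and alert address below are hypothetical, and the */10 step syntax requires a Vixie-style cron (on stock Solaris cron, list the minutes explicitly):

```
# Run the log check every 10 minutes; mail the output if the exit code is non-zero
*/10 * * * * /usr/local/bin/logrobot /var/log/messages 60 'error' 'panic' 5 10 -foundn > /tmp/logcheck.out || mailx -s "log alert" admin@example.com < /tmp/logcheck.out
```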


Source by Jonathan Rayson

Sun Solaris 10 – How to Setup a SAMP Server + VSFTP + Phpmyadmin (Solaris Apache2 Mysql5 Php5)


INTRODUCTION

This tutorial assumes you have some basic knowledge of how to use Unix and/or Linux and that you have already installed and set up your Sun Solaris server. If you have not, please check my other tutorials on setting up a Sun Solaris server and then come back to this tutorial. I will be right here waiting for you!

Okay, let's get started. As always, we are assuming you have pkg-get installed and working, with Blastwave set as your mirror site.

MYSQL

Let's take a look at the available packages to install first.

# pkg-get -a | grep mysql

This should output a good list of packages. I am going to install mysql 5 for this tutorial.

# pkg-get install mysql5

This should install several packages that mysql depends on, so let it run; it might take a while depending on your internet connection. Go ahead and answer "y" to all questions.

It should finish successfully and say something like this:

Installation of was successful.
bash-3.00# ## Executing postinstall script.
bash-3.00# Configuring service in SMF
MySQL 5 is using Service Management Facility. The FMRI is:
svc:/network/cswmysql5:default
No database directory found in the default location.
If you need to build the initial database directory,
see /opt/csw/mysql5/share/mysql/quick_start-csw
If you are using a non-default database directory location, please start mysql manually.
Now, isn't that nice? It even went ahead and created our SMF file for us so that we can use Solaris 10's SMF system. But you will notice that it could not locate the database directory. So that's what we are going to do next. We are going to use Blastwave's configuration script. Run it at the location stated above:

bash-3.00# /opt/csw/mysql5/share/mysql/quick_start-csw

Then you should get some output to your terminal that looks like this:
This is the blastwave quick start script to setup a MySQL5 database directory.

The base directory is /opt/csw/mysql5.
The default database directory is /opt/csw/mysql5/var.

If you have not setup a partition for the database and you want one; now is a good time to exit this script and create and mount the partition.

If you have not setup a my.cnf file and you do not want one of the sample files; now is a good time to exit and create the file /opt/csw/mysql5/my.cnf.
Data directory: The default is /opt/csw/mysql5/var

Follow the on-screen directions. You can go with the defaults for everything, but you might want to select one of the sample my.cnf files. The default uses my-small.cnf, which is for a server with a small memory footprint. You might want to go with one of the larger sample configs for a server that has more memory.

You should get a success response and then a message stating how to run mysql server. You are almost there! Now just type the following at your command prompt:

# svcadm enable cswmysql5

Then check to see if everything is working fine.

# svcs | grep mysql
You should get an answer like this:
online 0:22:05 svc:/network/cswmysql5:default

If you get another state like offline or maintenance, this means you have a problem and you will need to check your mysql log files or the SMF log files to see why it's not starting.

Let's try to connect to our mysql server. Now, if your PATH is currently something like this:
/usr/sbin:/usr/bin:/opt/csw/bin

You won't be able to just call mysql from the command line. I would recommend adding a symbolic link to the mysql executable.

# ln -s /opt/csw/mysql5/bin/mysql /opt/csw/bin/mysql
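Alternatively, instead of a symlink, you can put the MySQL bin directory on your search path. A sketch for a Bourne-style shell (add it to your profile to make it permanent):

```shell
# Add the CSW MySQL client directory to the search path for this session
PATH=$PATH:/opt/csw/mysql5/bin
export PATH
```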

Now let's connect to the mysql server by typing mysql.
# mysql

It should log us right in. Type exit to leave the mysql client. Let's now set a root password for our mysql server. Do so like this:
# /opt/csw/mysql5/bin/mysqladmin -u root password 'new-password'

Now let's check this login. Try just typing mysql at the command prompt and see what happens. You should get an access denied for user root.

Try again like this:
# mysql -u root -p
Then when it asks for your password give it the one you set in the above command.

You should now be logged in, and you are done: you have a password-protected mysql server, up and running and fully functional.
ADDITIONAL MYSQL SERVER SETTINGS

Let's say we want to create a mysql user account to use for our websites. Let's create this user now.

Login to mysql as root and run these commands
use mysql;
grant all privileges on *.* to 'ausername'@'localhost' identified by 'theuserspassword' with grant option;

APACHE

Let's take a look at the packages available.

# pkg-get -a | grep apache

For this tutorial we will be installing apache2

# pkg-get install apache2

Let this run for a while and install all needed software. It might take a while. Just enter Yes to most of the questions.

Since we used pkg-get to install apache2, it should be pretty much ready to go.
Let's first create a folder to host our web files. Since Sun Solaris puts the majority of your disk space during install in the /export partition, I will create a www folder in /export.
# mkdir /export/www
Now let's edit the config file which is located here:
bash-3.00# emacs /opt/csw/apache2/etc/httpd.conf

Change the variables you want to set. I pretty much just set the ServerName and ServerAdmin variables and changed the document root to a different place than the default. Search for the keywords to locate the portion of the config file to change.
DocumentRoot "/opt/csw/apache2/share/htdocs"
I changed this to DocumentRoot "/export/www"
And you have to change the Directory listing as well:
# This should be changed to whatever you set DocumentRoot to
#
<Directory "/opt/csw/apache2/share/htdocs">

change to

<Directory "/export/www">
Let's edit the index files our web server will serve by default. Search for DirectoryIndex. It should look like this:

DirectoryIndex index.html

Let's add some other pages to serve:

DirectoryIndex index.html index.php index.htm

You can add your virtual hosts to the end of this file as you like as well. More on this later.
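As a hypothetical sketch of what such a virtual host entry could look like in Apache 2 (the server name and paths below are placeholders):

```
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /export/www/example
    ErrorLog /opt/csw/apache2/var/log/example-error_log
</VirtualHost>
```

Each additional site gets its own VirtualHost block with its own DocumentRoot under /export/www.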
Now let's restart the apache2 server. First, let's check to make sure it's loaded:
# svcs | grep apache
If the response you get is the following then it's already running; make sure it's the cswapache2 service that is running.
online 18:03:03 svc:/network/http:cswapache2

If it's not enabled and running you should issue the following command and check again:
# svcadm enable -rs cswapache2
Since we made changes to the httpd.conf file we should issue a restart command.

# svcadm restart cswapache2

We should be ready to go. Since the directory has no files in it yet, if you go to your browser and type the IP address of your Solaris server, you should get a response with something like this:
Index of /

Congratulations! You have now gotten your apache server running.
PHP

Now let's install php5.

# pkg-get -i php5
# pkg-get -i ap2_modphp5 mod_php5 php5_curl php5_gd php5_mcrypt php5_mysql php5_mysqli phpmyadmin

Let this install and make sure to hit "Y" to continue with the installation.

Let's now edit the configuration file.

# emacs /opt/csw/php5/lib/php.ini

You will need to uncomment the mysql extension line (just remove the semicolon); note that on Solaris the module is a shared object, not a Windows .dll:
;extension=mysql.so

Change the following three lines to match the below lines in the php.ini file:
max_execution_time = 6000    ; Maximum execution time of each script, in seconds
max_input_time = 6000        ; Maximum amount of time each script may spend parsing request data
memory_limit = 128M          ; Maximum amount of memory a script may consume
After you make any changes to the php.ini file, you will need to restart the apache server. Do so by issuing the command:
# svcadm restart cswapache2

Now let's test to make sure php is working. Let's create a file in our apache default directory.

# emacs /export/www/index.php
Then type the following in that document:

<?php phpinfo(); ?>

Save this file and again point your web browser to your Solaris server's IP address. You should now get a nice php info page loading. Congratulations, you have now set up your SAMP server.

VSFTP

Let's install VSFTP
# pkg-get -i vsftpd
Let's first edit the vsftpd config file. You will want to enable the options that allow local users to connect to the server.

# emacs /opt/csw/etc/vsftpd/vsftpd.conf

You will want to make the following changes to your config file to allow your local users to log in (you might have to uncomment some of these lines so that they will be read; note that vsftpd does not accept spaces around the = sign):
anonymous_enable=NO
local_enable=YES
write_enable=YES
uncomment the xferlog_file line
uncomment the data_connection_timeout line
add the following line:
chroot_local_user=YES
This will force the local user that logs in to be chrooted to his/her home directory.
The FTP server logs the user into the home directory specified in the /etc/passwd file, so you need to make sure those paths are correct.
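A quick way to confirm that directory (the account name ftpuser below is a placeholder for a real account) is to print the sixth colon-separated field of the user's passwd entry:

```shell
# Print the home directory (6th field of /etc/passwd) for user "ftpuser"
awk -F: '$1 == "ftpuser" { print $6 }' /etc/passwd
```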
Let's see if vsftp is running:
# svcs | grep vsftpd

It should be in an online state.

Since we made changes to the vsftp config file let's restart the vsftpd server.
# svcadm restart cswvsftpd

PHPMYADMIN

Since we have already installed phpmyadmin with pkg-get, it should already be on our server. You can do a find to locate the folder, but it should be in the default location. Let's move it to our www folder:

# mv /opt/csw/apache2/share/htdocs/phpmyadmin /export/www/phpmyadmin

Now you should be able to load phpmyadmin by going to http://yourserverip/phpmyadmin

Let's edit our config file:

# cp /export/www/phpmyadmin/config.sample.inc.php /export/www/phpmyadmin/config.inc.php
# emacs /export/www/phpmyadmin/config.inc.php

You will want to edit the following lines:

$cfg['blowfish_secret'] = 'putanythinghere'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */
// $cfg['Servers'][$i]['controluser'] = 'amysqlusername';
// $cfg['Servers'][$i]['controlpass'] = 'amysqlpassword';

After this, you're all set! Just go back to the URL http://yourserverip/phpmyadmin

Now it should connect to your mysql server and give you a login screen. Log in with your mysql user, and everything should be working at this point.

That concludes this tutorial.

[ad_2]

Source by Josh Bellendir