Turn a Physical Linux or Windows Machine Into A Virtual Machine for Free
We will be focusing on creating this masterpiece in the Windows environment, but don't worry: the same principles can be used in any operating system that can run Virtual Box.
List of Software and Hardware needed:
Software:
-Virtual Box and Extension Pack
-Windows 7 or higher PC or most any Linux Distro
-Redo Backup and Recovery ISO
-YUMI installer
Hardware:
-USB flash drive
-USB hard drive
The overall benefits of performing this procedure are threefold. First, you will see instant cost savings on power, climate control, and the space required. Second, manageability and scalability increase dramatically, because virtual disks and virtual networks can be scaled up or down with finer-grained control. Third, you gain the redundancy and faster disaster recovery offered by cloud services, especially when the VM is tied into your existing network infrastructure for a seamless transition when disaster strikes.
While this process can be completed in numerous ways with different software, this is the way that I am familiar with and all the tools needed are free.
Sounds daunting? No sweat, but where do we start first?
Well, we need to get an image of the physical machine onto removable media (a USB hard drive). I recommend a USB hard drive over a plain USB flash drive because of the space the image will take up. We will also need a USB flash drive, at least 2 GB in size, to use as bootable media for Redo Backup and Recovery.
Plug the USB hard drive into your USB port and open up the folder structure. Create a folder in a location that you can remember, e.g. D:\Your Computer's Name. This is where we will store the initial image of the physical machine. When this is done, eject the USB hard drive by right-clicking the "Safely Remove Hardware" icon in your taskbar, clicking Eject "whatever your USB hard drive is named", and unplugging the USB HDD.
Next, we need to create a bootable USB to load Redo Backup and Recovery on. Download a small program called “YUMI”. YUMI will create a bootable USB flash drive for Redo Backup and Recovery on it. Also grab a copy of Redo Backup and Recovery, save both files to your desktop or location of choice.
Now, run YUMI and choose your USB flash drive from the list (remember to choose the USB flash drive, not the USB HDD, which should be unplugged anyway!). Choose "Redo Backup and Recovery" from the list of software you can create an installer for. Click "Browse" to locate the Redo Backup and Recovery ISO to include in the install. Finally, click "Create" to start building the bootable Redo Backup and Recovery USB. When this is done, YUMI will ask if you want to add any more distros; just say "no". Eject the USB flash drive using the "Safely Remove Hardware" icon in your taskbar, click Eject "whatever your USB flash drive is named", and unplug it. Keep the Redo Backup and Recovery ISO; we will need it later.
Make sure the physical computer you would like to virtualize is powered down; if not, power it down now. Insert only the USB flash drive into the computer. Power up the computer and press the correct key to access the boot menu, or make sure the USB drive is set to boot before the computer's internal hard drive. Choose the USB entry to boot from, and YUMI should load. Choose the entry for "Tools", then "Redo Backup and Recovery". Press Enter on the Redo menu to start the mini recovery OS. When Redo Backup and Recovery has loaded, insert your USB HDD and give it about 20 seconds to be detected.
Open Redo Backup and Recovery Software:
1. Choose “Backup”
2. Choose your disk to backup (your physical computer’s disk)
3. Choose your partitions to backup (typically it would be all partitions and MBR)
4. On the “Destination Drive” screen choose “Connected directly to my computer” and click browse.
5. Locate the folder we made earlier, e.g. D:\Your Computer's Name, and click OK.
6. Choose a name for the disk image. I will usually choose the date, click next. The backup process will take anywhere from 1 hr to 3 hrs depending on hard drive capacity and computer speed.
Congratulations, at this point you have made a full backup of your physical machine. Please click "Close" in the Redo Backup and Recovery program, choose the power button in the bottom right corner of the screen, select "Shutdown", and let the computer shut down. Remove both the USB flash drive and the USB HDD, then boot up any computer that has Windows 7 or higher installed.
Now, let's turn that physical machine into a virtual machine!
Open up Virtual Box and choose "New". Give your virtual machine a name and choose its type and version. Choose your memory size; I usually allot 2 GB (2048 MB) if I plan on running the VM on a machine that has 4 GB of RAM physically installed. Create a new hard drive and choose VHD as the hard drive file type, then click next. Choose "Dynamically allocated" for the storage, click next. Give your VHD a name; I usually name it after what is running on it, so name it what you named your computer. Make the VHD large enough to store your operating system; I usually choose 200 GB to be on the safe side, but this depends on how much data was on your physical machine. You are now returned to the Virtual Box Manager screen with your new VM present. Make sure the Virtual Box Extension Pack has been installed. Obtain the extension for your software version and install it like so:
In Virtual Box, click File–>Preferences–>Extensions–>Add Package–>Locate extension file and select it. It will be automatically installed.
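If you prefer the command line, the same VM can be sketched with VBoxManage. This is a sketch only: the VM name, OS type and sizes below are examples, and the exact --ostype values vary by VirtualBox version.

```shell
# Create and register a VM named after the physical machine (example name).
VBoxManage createvm --name "MyOldPC" --ostype Windows7 --register
VBoxManage modifyvm "MyOldPC" --memory 2048

# Create a 200 GB dynamically allocated VHD and attach it to a SATA controller.
VBoxManage createhd --filename "MyOldPC.vhd" --size 204800 --format VHD
VBoxManage storagectl "MyOldPC" --name "SATA" --add sata
VBoxManage storageattach "MyOldPC" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "MyOldPC.vhd"
```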
Prepare the conversion! Use only Option A or Option B:
Option A: If you can get USB support working in Virtual Box:
Make sure that you have installed the extension pack and setup USB access properly, if you are having some troubles, refer to the Virtual Box document here:
https://www.virtualbox.org/manual/ch03.html#idp55342960
In Virtual Box, click on your VM name and choose “Settings” at the top, choose “Storage”. Click on the empty CD/DVD icon and then the CD/DVD icon on the right under “Attributes” and select your Redo Backup and Recovery ISO and click “OK”. At this point you have the Redo Backup and Recovery.iso at the ready and a blank VHD to install to. All you need to do now is insert your USB hard drive and skip over Option B because you do not need to perform it.
Option B: If you cannot get USB support to work in Virtual Box. No problem; it's what happened to me, so I found a way around it.
In Virtual Box, click on your VM name, choose "Settings" at the top, then "Storage", and choose "Add hard disk" next to Controller: SATA or Controller: IDE, whichever you have. Choose "Create new disk", choose VHD, again make it 200 GB dynamically allocated, and name it "Installer". Underneath "Storage Tree" click on the empty CD/DVD icon, then the CD/DVD icon on the right under "Attributes", select your Redo Backup and Recovery ISO, and click "OK". At this point you have the Redo Backup and Recovery ISO at the ready, a blank VHD named after your computer, and another blank VHD named Installer. Now close Virtual Box, right click on "Computer", and choose "Manage". Left click on "Disk Management", then right click on "Disk Management" again and choose "Attach VHD". Browse for the location of the Installer VHD you created in Virtual Box, usually in the "My Documents" folder, and click OK. Now you can copy the physical computer backup image we took earlier from D:\Your Computer's Name to the Installer VHD. After the contents have been copied, right click on "Disk Management" again and click "Detach VHD". Open up Virtual Box and proceed to the next step.
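The Attach/Detach VHD steps can also be scripted with the built-in diskpart tool from an elevated command prompt. The path below is an example only and must point at your actual Installer VHD:

```
select vdisk file="C:\Users\You\Documents\Installer.vhd"
attach vdisk
rem ...copy D:\Your Computer's Name onto the newly attached volume...
detach vdisk
```

Save the attach lines to a file such as attach.txt and run diskpart /s attach.txt (and likewise for the detach step).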
Lets Convert This Thing!
Once you have either USB support or the Installer VHD set up and the Redo Backup and Recovery ISO mounted, press "Start" on your VM in Virtual Box. You will be met by the familiar Redo Backup and Recovery boot menu; press Enter to proceed. Launch the Backup and Recovery program if it did not start automatically and choose "Restore". In a nutshell, you will choose where your backup image lives, "The Source Drive" (your USB HDD, or the Installer VHD if applicable), and where to install the image (the blank VHD named after your computer). After you have chosen to install into the blank VHD, confirm the prompt to overwrite any data and let the recovery process begin. After this is finished, click close and shut down Backup and Recovery as you did before. The VM should stop running. Click on "Settings" from the Virtual Box Manager and unmount the Backup and Recovery ISO and the Installer VHD if applicable. Leave the VHD with the name of your computer (or whatever you named it) attached and click "OK" to go back to the Virtual Box Manager. Click "Start", and you should now be looking at a fully virtualized version of your physical computer!
Celebrate the many uses of this powerful little VHD!
You can transport this VHD and include it in any Virtual Box VM instance or even VMware if you are so inclined. You can run it on your local premises or deploy it in the cloud. A cloud instance of this VM would either require running Virtual Box on your cloud computing instance, or running it natively in your cloud computing space if the hosting provider supports it.
Common Gotchas and Troubleshooting:
Q: When trying to run my Linux-based virtual machine, I get "Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)"?
A: This happens because, in the backup and recovery process, all the entries for hda##, hdb## and so forth have been converted to sda## etc. First, copy your precious VHD so you won't lose your work if something goes wrong. Then all you have to do is mount the Backup and Recovery ISO, start your VM again, and bring up a terminal session. Mount the root partition and edit the boot device entries for GRUB or LILO. For example: with GRUB, the entries live in menu.lst and in /etc/fstab. With LILO, they are in /etc/lilo.conf; run /sbin/lilo -v afterwards to write the changes.
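As a concrete sketch of the hda-to-sda edit (the file name and entries here are illustrative; on the real VM you would run the same sed against the mounted root partition's /etc/fstab and boot loader config, after making a copy):

```shell
# Create a sample fstab with old-style IDE device names (illustration only).
cat > /tmp/fstab.sample <<'EOF'
/dev/hda1  /      ext3  defaults  1 1
/dev/hda2  swap   swap  defaults  0 0
EOF

cp /tmp/fstab.sample /tmp/fstab.sample.bak   # always keep a backup first
# Rewrite every /dev/hdX reference to /dev/sdX in place.
sed -i 's|/dev/hd\([a-z]\)|/dev/sd\1|g' /tmp/fstab.sample
cat /tmp/fstab.sample
```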
Q: When trying to run my Windows based virtual machine I get a boot error?
A: Obtain a copy of a Windows disc and mount it inside of Virtual Box, making sure it is set to boot first. Choose the "Repair" option, then "Start Up Repair", and let it run. If this does not do the trick, go back into the "Repair" option and choose "Command Prompt". Try these commands one at a time, shutting down and unmounting the Windows disc each time to check whether the problem has been corrected:
bootrec.exe /FixMbr. Then restart to see if resolved. If no result, try:
bootrec.exe /FixBoot. Then restart to see if resolved. If no result, try:
bootrec.exe /RebuildBcd. Then restart to see if resolved. If there is still no result, you may have to rebuild your BCD store by running these commands one line at a time:
bcdedit /export C:\BCD_Backup
c: <- only if your Windows installation is installed on C:
cd boot
attrib bcd -s -h -r
ren c:\boot\bcd bcd.old
bootrec /RebuildBcd
Source by David T Goodwin
Where Is Microsoft Excel Used?
Whether you work at an accounting firm, a marketing company, an auto dealership, a school attendance office, a manufacturing plant’s human resources department, or an office associated with city, county, state or federal government, chances are, you’ll be called upon to use and learn Excel.
Just about every workplace has a demand for Excel, the computing world’s most commonly used software program for comparative data analysis. Excel has been available in various incarnations for more than a decade. Each subsequent release takes the program to new territory.
Popularly known as the best spreadsheet program on the market, Excel is powerful, easy to use, and remarkably efficient. Excel is highly interactive. Its spreadsheet cells are arranged in a collection of rows and columns, each of which can hold a number, a text string, or a formula that performs a function, such as calculation. It’s easy to copy and move cells as well as modify formulas. The spreadsheet is displayed on the computer screen in a scrollable window that allows the document to be as deep or as wide as required.
Working for a major newspaper in Northern California, I was one of several reporters involved in the annual evaluation of our county’s economy. The job involved collecting data that would be punched into Excel spreadsheets that ultimately ranked information according to the category of statistics being reviewed.
The beauty of Excel, from the standpoint of newspaper research projects, is that you can use formulas to recalculate results by changing any of the cells they use. With this model, you can use the same spreadsheet data to achieve various results by simply defining and changing formulas as desired. It is this feature that makes Excel so useful in so many different arenas.
With a click of the mouse, we reporters were able to get answers to a wide variety of questions. Which employers had the greatest number of workers? Which ones had the highest amount of gross annual receipts? Which ones appeared to be growing and which ones had declining sales? What was the volume of real estate loans and had there been a decline or increase from the previous year?
We looked at local and national retail, services, financial institutions, government entities, agriculture, the wine industry, tourism and hospitality, manufacturing, residential and commercial real estate, everything imaginable.
Excel allowed us to examine ratios, percentages, and anything else we wanted to scrutinize. Finally, we were able to use Excel to compare the results to data from previous years.
Since reporters tend to be former English majors, most of those who worked on this annual project were more familiar with Microsoft Word than any other software program. Therefore, most were required to undergo Excel training. For some, learning Excel was easier than for others. A few relied on guides such as Microsoft Excel Bible. Some reporters underwent an Excel tutorial while others learned by doing.
Not only were the Excel spreadsheets crucial to the research, the format of each was published in the newspaper. Here is where some additional Excel functions came into play. Editors were able to make the spreadsheets more visually appealing by using colors and shading, borders and lines, and other features that made the spreadsheets easy for readers to decipher.
Wearing another of my several hats in the newsroom, I often wrote articles concerning the local job market. I found proficiency in Excel was a requirement for a wide variety of employment positions and that area recruiting firms offered their clients opportunities to take free or low-cost Excel tutorials in preparation for the workplace. Most employers expect job candidates to already know the software that the work will require and don’t want to have to train new hires.
Don’t kid yourself. If you’re seeking any kind of office work, you’ll need to know not only Microsoft Word but also Excel.
Excel and Microsoft are trademarks of Microsoft Corporation, registered in the U.S. and other countries.
Source by Sheri Graves
Logging for the PCI DSS – How to Gather Server and Firewall Audit Trails for PCI DSS Requirement 10
PCI DSS Requirement 10 calls for a full audit trail of all activity for all devices and users, and specifically requires all event and audit logs to be gathered centrally and securely backed up. The thinking here is twofold.
Firstly, as a pro-active security measure, the PCI DSS requires all logs to be reviewed on a daily basis (yes, you did read that correctly: review ALL logs DAILY; we shall return to this potentially overwhelming burden later). This forces the Security Team to become more intimate with the daily 'business as usual' workings of the network. This way, when a genuine security threat arises, it will be more easily detected through unusual events and activity patterns.
The second driver for logging all activity is to give a ‘black box’ recorded audit trail so that if a cyber crime is committed, a forensic analysis of the activity surrounding the security incident can be conducted. At best, the perpetrator and the extent of their wrongdoing can be identified and remediated. At worst – lessons can be learned from the attack so that processes and/or technological security defenses can be improved. Of course, if you are a PCI Merchant reading this, then your main driver is that this is a mandatory PCI DSS requirement – so we should get moving!
Which devices are within scope of PCI Requirement 10? The same answer as for the PCI DSS as a whole: anything involved with handling, or with access to, card data is within scope, and we therefore need to capture an audit trail from each of them. The most critical devices are the firewall, servers holding settlement or transaction files, and any Domain Controller for the PCI estate, although all 'in scope' devices must be covered without exception.
How do we get Event Logs from ‘in scope’ PCI devices?
We’ll take them in turn –
How do I get PCI Event Logs from Firewalls? The exact command set varies between manufacturers and firewall versions, but you will need to enable logging via either the firewall web interface or the command line. Taking a typical example, a Cisco ASA, the CLI command sequence is as follows:
logging on
no logging console
no logging monitor
logging a.b.c.d (where a.b.c.d is the address of your syslog server)
logging trap informational
This will make sure all 'Informational' level and above messages are forwarded to the syslog server and guarantee all logon and logoff events are captured.
How do I get PCI Audit Trails from Windows Servers and EPoS/Tills? There are a few more steps required for Windows servers and PCs/EPoS devices. First of all, it is necessary to make sure that logon and logoff events, privilege use, policy change and, depending on your application and how card data is handled, object access are all audited; use the Local Security Policy to configure this. You may also wish to enable System event logging if you want to use your SIEM system to help troubleshoot and pre-empt system problems, e.g. a failing disk can be caught before complete failure by spotting disk errors. Typically we will need Success and Failure to be logged for each event:
- Account Logon Events- Success and Failure
- Account Management Events- Success and Failure
- Directory Service Access Events- Failure
- Logon Events- Success and Failure
- Object Access Events- Success and Failure
- Policy Change Events- Success and Failure
- Privilege Use Events- Failure
- Process Tracking- No Auditing
- System Events- Success and Failure
* Directory Service Access Events available on a Domain Controller only
** Object Access – Used in conjunction with Folder and File Auditing. Auditing Failures reveals attempted access to forbidden secure objects which may be an attempted security breach. Auditing Success is used to give an Audit Trail of all access to secured date, such as, card data in a settlement/transaction file/folder.
*** Process Tracking - not recommended, as this will generate a large number of events. It is better to use a specialized whitelisting/blacklisting technology.
**** System Events - not required for PCI DSS compliance, but often used to provide extra 'added value' from a PCI DSS initiative by giving early warning signs of hardware problems and so pre-empting system failures.
Once events are being audited, they then need to be relayed back to your central syslog server. A Windows syslog agent program will automatically bind into the Windows event logs and send all events via syslog. The added benefit of such an agent is that events can be formatted into standard syslog severity and facility codes and also pre-filtered. It is vital that events are forwarded to the secure syslog server in real time, so they are backed up before there is any opportunity to clear the local server event log.
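On Windows Server 2008 and later, the same audit policy can be sketched from an elevated prompt with the built-in auditpol tool. The category names below are the English defaults; verify them on your system with auditpol /list /category before relying on this:

```
auditpol /set /category:"Account Logon" /success:enable /failure:enable
auditpol /set /category:"Account Management" /success:enable /failure:enable
auditpol /set /category:"Logon/Logoff" /success:enable /failure:enable
auditpol /set /category:"Object Access" /success:enable /failure:enable
auditpol /set /category:"Policy Change" /success:enable /failure:enable
auditpol /set /category:"Privilege Use" /failure:enable
auditpol /set /category:"System" /success:enable /failure:enable
```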
Unix/Linux Servers - enable logging using the syslogd daemon, which is a standard part of all UNIX and Linux operating systems such as Red Hat Enterprise Linux, CentOS and Ubuntu. Edit the /etc/syslog.conf file and enter the details of your syslog server.
For example, append the following line to the /etc/syslog.conf file
*.* @a.b.c.d
Or if using Solaris or other System 5-type UNIX
*.debug @a.b.c.d
*.info @a.b.c.d
*.notice @a.b.c.d
*.warning @a.b.c.d
*.err @a.b.c.d
*.crit @a.b.c.d
*.alert @a.b.c.d
*.emerg @a.b.c.d
Where a.b.c.d is the IP address of the targeted syslog server.
If you need to collect logs from a third-party application, e.g. Oracle, then you may need to use a specialized Unix syslog agent which allows third-party log files to be relayed via syslog.
Other Network Devices - routers and switches within the scope of the PCI DSS will also need to be configured to send events via syslog. As was detailed for firewalls earlier, syslog is an almost universally supported function on network devices and appliances. However, in the rare case that syslog is not supported, SNMP traps can be used, provided the syslog server being used can receive and interpret them.
PCI DSS Requirement 10.6: "Review logs for all system components at least daily". We have now covered how to get the right logs from all devices within scope of the PCI DSS, but this is often the simpler part of handling Requirement 10. The aspect of Requirement 10 that concerns PCI merchants the most is the extra workload they expect from being responsible for analyzing and understanding a potentially huge volume of logs. There is often an 'out of sight, out of mind' philosophy, or an 'if we can't see the logs, then we can't be responsible for reviewing them' mindset: once logs are made visible and placed on the screen in front of the merchant, there is no longer any excuse for ignoring them.
Tellingly, although the PCI DSS avoids being prescriptive about how to deliver against the 12 requirements, Requirement 10 specifically details “Log harvesting, parsing, and alerting tools may be used to meet compliance with Requirement 10.6”. In practice it would be an extremely manpower-intensive task to review all event logs in even a small-scale environment and an automated means of analyzing logs is essential.
However, when implemented correctly, this will become so much more than simply a tool to help you cope with the inconvenient burden of the PCI DSS. An intelligent Security Information and Event Management system will be hugely beneficial to all troubleshooting and problem investigation tasks. Such a system will allow potential problems to be identified and fixed before they affect business operations. From a security standpoint, by enabling you to become 'intimate' with the normal workings of your systems, you are well placed to spot truly unusual and potentially significant security incidents.
For more information go to http://www.newnettechnologies.com
All material is copyright New Net Technologies Ltd.
Source by Mark Kedgley
Automatically Capturing, Saving and Publishing Serial RS232 Data to the Web
This article describes setting up a system that uses a lightweight command line (CLI) install of Linux (Ubuntu in this example) to capture RS232 (serial) data and upload it to an FTP server.
This is useful for applications involving data collection from sensors or other serial devices, such as amateur packet radio, where you want to use the data on a web site or another remote machine.
The process involves starting the programs 'screen' and 'minicom' when the machine starts up. The 'screen' output will be captured to a text file, and this text file will be sent up to an FTP server using 'lftp', scheduled via a 'cron' job.
This article will not cover installing a command line Linux such as Ubuntu minimal – it is assumed you have a functional system. When the machine starts up, it should start a detached ‘screen’ session, starting minicom (terminal program) within – logging the screen output to file.
To install screen and minicom, use: sudo apt-get install screen minicom
Verify your minicom settings
Run sudo minicom -s, set the default port, and clear out the init strings, since the startup script runs as root. Press Ctrl-A then O to reach the configuration menu, and set the port to ttyS0 at 1200 baud (or the appropriate other baud rate). Note that I also had to edit sudo nano /etc/minicom/minirc.dfl to get it working properly, even though I had saved the settings from minicom. The contents of my file:
# Machine-generated file - use "minicom -s" to change parameters.
pu baudrate 1200
pu bits 8
pu parity N
pu stopbits 1
pu mhangup
pr port /dev/ttyS0
Get screen to auto-start at boot and run minicom:
In /etc/init.d, create a startup script: sudo nano /etc/init.d/startHAM
contents:
#!/bin/sh
/usr/bin/screen -d -m -L /usr/bin/minicom
modify the file so it will run at startup:
sudo chmod +x /etc/init.d/startHAM
sudo update-rc.d startHAM defaults 80
—
I could not get logging to work by passing "-L", so edit the file: sudo nano /etc/screenrc
and add the line:
log on
It is recommended that you reboot after editing these files. After you reboot, you can check whether it is working by typing sudo screen -r to resume the screen session; you should be in minicom and should be able to see/type to your TNC (or device).
If that works, you can check the log file; mine is saved to the root directory: nano /screenlog.0
You can tail it to watch in real time anything written to the log: tail -f /screenlog.0
You may want to sudo screen -r, press return / type mheard, etc. (for a packet modem) to generate some text, then press Ctrl-A d to detach, and then nano /screenlog.0 and go to the end. Sometimes the control characters may hide the output, but it is really there.
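If the control characters get in the way, one sketch is to strip the non-printing characters while keeping tabs and newlines. The printf line below just fakes some modem output for illustration; on the real system you would feed screenlog.0 through the same tr command:

```shell
# Simulated TNC output with a carriage return and an ANSI escape sequence.
printf 'PK232 mheard:\r\nKC1ABC-1\033[0m\n' | tr -d '\000-\010\013-\037'
# On the real log: tr -d '\000-\010\013-\037' < /screenlog.0 | less
```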
Now you have screen and minicom starting at boot, and all the screen output is being saved to the screenlog.0 file.
Uploading the screenlog.0 file to your web site ftp server:
The program lftp can be used to synchronize files to an ftp server (your web host). To install lftp, type: sudo apt-get install lftp
Create an lftp script file “SyncHAM.txt” in your home directory: sudo nano SyncHAM.txt
Contents:
open ftp://username:password@mywebhostaddress.com
set ftp:ssl-allow no
mput -O /public_html/myhostbasepath /screenlog.0
-O sets the base path on the web host; after it comes the path of the file to be sent (/screenlog.0).
Note that the ssl-allow no line is only needed if your host does not support SSL.
To test it, type: sudo lftp -f SyncHAM.txt
Setup Cron Job
create the file: sudo nano /etc/cron.daily/hamsync
contents:
#!/bin/sh
lftp -f /home/cmate/SyncHAM.txt
echo “done with SyncHAM.txt”
make it executable:
sudo chmod +x /etc/cron.daily/hamsync
Be sure to reboot the machine. You should now have a functional system to capture the serial data and upload it daily to the ftp server.
Source by Scott Szretter
How To Repair The Acer D2D Recovery
This tutorial can also help to do it on other computer brands
Disclaimer: First of all, you must be aware that some of the operations to come can cause irreversible changes to your hard disk. I cannot recommend strongly enough that you make a backup of your system before launching into any hazardous operation. Any damage and/or modification done to your system will be entirely your responsibility. The following procedures were done on an Acer Aspire 5102wlmi, and some also worked on a Dell Inspiron 9400/1705.
As you know, Acer computers, and those of other manufacturers, are now delivered with a restoration system installed in a hidden partition of your hard disk. This system launches when you press the keys ALT+F10 simultaneously. Sometimes, for various reasons, this system stops functioning.
The first cause is often that the D2D Recovery function is disabled in the BIOS (main menu).
The solution: enable the function and try pressing ALT+F10 while the computer is starting.
The second cause: the hidden PQSERVICE partition was erased or damaged, or you replaced the disk, in which case the partition is not present.
The solution: if you did not previously make a backup of your system by making a disc image, it will not be possible to use the D2D recovery. Your only hope will be to have an Acer Recovery CD/DVD in your possession.
The third cause: the Acer Master Boot Record (MBR) was damaged or replaced by a non-Acer MBR. As long as the PQSERVICE partition is present, or you can get hold of the necessary Acer files, you can reinstall the Acer MBR.
The solution :
First method : on a functional Windows system:
1 Disable the D2D recovery option in the BIOS.
2 Open a Windows session with an administrator account.
3 Download, unzip and launch partedit32 (registration required for download).
4 Identify the PQSERVICE partition by its size (at the bottom of the partedit window there is a partition information box); it is a small partition of approximately 2 to 6 GB. Once found, change the type of the partition to 0C and save. Restart and open a session with an administrator account; you should now be able to navigate to the PQSERVICE partition. Look for the two files mbrwrdos.exe and rtmbr.bin. Once they are located, open a command prompt and run mbrwrdos.exe install rtmbr.bin; this will install the Acer MBR. Close the command prompt window, restart Windows, go into the BIOS and re-enable D2D Recovery. Now ALT+F10 should launch the Acer recovery when the computer starts.
Second method : On a nonfunctional Windows system.
For that you must use a Linux distribution (for me Mandriva provided all the tools necessary).
1 Boot on the Mandriva install CD/DVD; the boot menu will give you the option to repair or restore the Windows boot loader.
2 If that is not enough, launch an installation of Linux (a good occasion to test this terrific OS) and choose LILO as the boot loader (a boot menu that lets you choose between several operating systems). Once the installation is finished, restart your computer. In the boot loader menu you will have at least two Windows options; the first generally points to PQSERVICE. Choose it and you will boot directly into the Acer D2D recovery.
This last solution is the simplest one. Just note that during the Linux installation you will have to resize your Windows partition to make room for a new Linux partition; this is the most perilous part because it is irreversible, so take your precautions at this point.
Source by Alan Bradock
Easy Steps to Stop SMTP AUTH Relay Attack and Identify Compromised Email Account for Postfix
Today, most email applications, such as Sendmail, Postfix, or even MS Exchange, have been redesigned to reduce the possibility of becoming a spam relay. In our experience, most SMTP AUTH relay attacks are caused by the compromise of weakly password-protected user accounts. Once such accounts are discovered and compromised, spammers authenticate using the user credentials, are granted relay access via the server, and use it to send spam.
Below are easy steps to stop these spam emails quickly and identify which account(s) have been compromised.
Step 1: Put the mail queue on hold.
Large amounts of spam keep queueing in your mail spool, and what is even worse, the spam can fill up your /var partition. It is therefore best to put the mail queue on hold temporarily until you find out which account has been exploited by the spammer to send such a large amount of email.
Step 2: Check your mail log.
Go to /var/log/maillog and have a quick look at the lines containing from:<>. You might see lots of email domain names that do not belong to your organization. This is because the spammer is faking the mail from:<> address.
Step 3: Identify the compromised accounts authenticating SMTP AUTH connections
Next, let us check which email accounts have been exploited. Cat the mail log, grep for sasl_username, and sort the output. You should see a long list of login attempts and sessions for the exploited accounts. You can also get a quick count by piping through the wc -l command to see the total number of sessions for a particular user.
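A sketch of that check follows. The log lines below are fabricated for illustration; on a live Postfix server you would point the same pipeline at /var/log/maillog:

```shell
# Fabricated sample of Postfix smtpd log lines (illustration only).
cat > /tmp/maillog.sample <<'EOF'
May  1 10:00:01 mx postfix/smtpd[123]: client=unknown[1.2.3.4], sasl_method=LOGIN, sasl_username=alice
May  1 10:00:02 mx postfix/smtpd[124]: client=unknown[5.6.7.8], sasl_method=LOGIN, sasl_username=bob
May  1 10:00:03 mx postfix/smtpd[125]: client=unknown[5.6.7.9], sasl_method=LOGIN, sasl_username=bob
EOF

# Count SMTP AUTH sessions per user; the busiest account is the prime suspect.
grep -o 'sasl_username=[^,]*' /tmp/maillog.sample | sort | uniq -c | sort -rn
```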
Step 4: Disable the exploited email account.
Once we have the sasl_username string, which identifies the user account, you are advised to disable it or change its password to a complex one.
Step 5: Move the mail queue or delete the spam email
Now we have to deal with our mail queue. The easiest and fastest way is to move the mail queue aside and do the housekeeping later. Alternatively, you can delete the spam email using a Bash script.
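If the compromised account is known, the held spam can also be deleted straight from the queue instead of moved aside. A hedged sketch for Postfix; victim@example.com is a placeholder for the account you identified in Step 3:

```shell
# Print the queue IDs of messages from the compromised account and
# feed them to postsuper for deletion ('-' reads IDs from stdin).
mailq | awk 'BEGIN { RS = "" } /victim@example\.com/ { print $1 }' \
      | tr -d '*!' | postsuper -d -
```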
Step 6: Release the mail queue
Remember to release the mail queue after the housekeeping process and to keep monitoring the mail traffic.
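Releasing the queue is the mirror image of Step 1; again assuming Postfix:

```shell
# Release every held message back into the active queue,
# then check the queue summary while it drains.
postsuper -H ALL
mailq | tail -n 1
```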
Source by James Edward Lee
How To Quickly Make A Bootable USB Stick With FreeBSD
Install FreeBSD, or use an existing FreeBSD installation, and follow these steps:
1) First, you need to prepare and format your USB stick:
fdisk -BI /dev/da0
bsdlabel -B -w da0s1
newfs -U -O1 /dev/da0s1a
boot0cfg -v -B da0
(“-U -O1” [“O” like in Olympus, not zero] is for UFS1 which provides much faster copying than UFS2; if you decide for UFS2, type “-U -O2” – but expect that the copying will be slower)
2) Then mount it: mount /dev/da0s1a /usb
3) Copy all directories (FreeBSD) to the stick
4) After copying, modify the /usb/boot/loader.conf (explained below)
5) In the /boot directory on your USB stick you must have MFS (Memory File System – mfsroot.gz), which you will make (instructions are below)
6) Modify your /etc/fstab in MFS and put the following line (only) there:
/dev/md0 / ufs rw 0 0
7) After you boot your computer with the stick, you will be in the MFS environment from which you will mount your USB stick with mount_nullfs (described below)
Modification of /boot/loader.conf on your USB stick
You must have the following lines in your /boot/loader.conf (some lines are optional):
mfsroot_load="YES"
mfsroot_type="mfs_root"
mfsroot_name="/boot/mfsroot"
nullfs_load="YES"
splash_bmp_load="YES"
vesa_load="YES"
geom_uzip_load="YES"
geom_label_load="YES"
bitmap_load="YES"
bitmap_name="/boot/splash.bmp"
snd_driver_load="YES"
kern.maxfiles="25000"
kern.maxusers="64"
vfs.root.mountfrom="/dev/md0"
# Additional filesystem drivers
udf_load="YES"
linux_load="YES"
fuse_load="YES"
ntfs_load="YES"
ext2fs_load="YES"
reiserfs_load="YES"
Making your own MFS
FreeBSD, after the kernel boots, can use the root file system in memory (mfsroot_load=”YES” command in /boot/loader.conf will do the trick). To build such a memory file system, type the command:
dd if=/dev/zero of=mfsroot bs=1024k count=42
An mfsroot file of about 42 MB will be created. You need to format it, mount it, and copy the most important files into it from your FreeBSD system (/bin, /sbin, /etc, /root….):
mdconfig -a -f mfsroot md0
newfs /dev/md0
mount /dev/md0 /mnt
Once copied, you must unmount it and gzip it: gzip mfsroot
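Assuming the md device from the previous steps is md0, the copy, unmount, and compress sequence might look like this (the directory list is illustrative; pick what your system actually needs):

```shell
# Copy a minimal userland into the mounted memory file system.
# Which directories you need depends on your FreeBSD installation.
for d in bin sbin etc root libexec; do
    cp -Rp /$d /mnt/
done

# Unmount, detach the md device, and compress the image.
umount /mnt
mdconfig -d -u 0
gzip mfsroot      # produces mfsroot.gz; copy it to /usb/boot
```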
Optionally, you can chroot it to see if everything works, then copy the mfsroot.gz to /usb/boot onto your USB flash drive (or disk). If you think it may be a problem to pick the most important files for your MFS (from your FreeBSD installation), search for mfsbsd in Google and either use its toolset or the MFS image alone (contained in the downloadable ISO of mfsbsd).
After booting from the USB stick (you will jump into MFS), you must mount the physical USB stick:
/sbin/mount -o ro /dev/da0s1a /usb
/sbin/mount_nullfs /usb/boot /boot
/sbin/mount_nullfs /usb/usr /usr
The above commands let you use the big /usr directory on your USB stick instead of the /usr dir in MFS. The mount_nullfs /usb/boot /boot mount is optional, as in your MFS /boot directory only the following files are needed for the little MFS to boot (in the /boot/kernel directory in MFS): geom_label.ko, geom_uzip.ko, zlib.ko and their debug symbols (zlib.ko.symbols, etc.). By mounting the /usb/boot dir via mount_nullfs onto the /boot directory in your MFS you will be able to load kernel modules.
Source by Juraj Sipos
Technical Writing – Components Of Windows User Interface In Software Documentation (2) – Window
User interface documentation, one of the important tasks in software documentation, requires clear and consistent definition of all interface components. In this second part of the series, we continue with our survey of the most important interface components that a technical writer should be familiar with.
NOTE: Windows, Mac and Linux machines all have different user interfaces, depending on the particular Operating System (OS) (or “distribution” in the case of Linux) installed on your machine. This series is limited to the Windows interface only.
First of all, let’s clarify the conceptual difference between a SCREEN and a WINDOW.
A SCREEN, as defined by Microsoft, is the “graphic portion of a visual output device.” It is sometimes used interchangeably with a “MONITOR” or a “DISPLAY.” Sometimes both are used together as in the retronym “monitor screen.”
A WINDOW, on the other hand, refers to an individual display area surrounded by a FRAME, displayed when the user clicks a button or selects certain menu options.
A “screen” displays one or more “windows,” but not the other way around.
A “screen” has one size which is the size of the monitor. Every “window,” on the other hand, might have a different size depending on the user preference.
A “window” is a more abstract term than a “screen,” which is why, although there is a “screen RESOLUTION” (the number of pixels per unit length of screen), there is no corresponding “window resolution.” There are, for example, “screen SAVER” programs, but no similar “window savers.” You can save and close a window, but the screen, as the physical medium of the interface, is always there, no matter which window(s) it is displaying.
When there are multiple windows open in a screen, the window that is selected and responds to user commands is referred to as the window in FOCUS. By “focusing” on a window you select it and make it respond to your interaction.
Source by Ugur Akinci
How to Fix Pci Sys Missing or Corrupt Error on Windows 7 or Vista
The pci.sys missing or corrupt error occurs when the file is corrupt or missing from the computer. How do you fix it? There are two frequently used methods to resolve this issue. One is using the Recovery Console. The other is performing an in-place upgrade to restore the missing or corrupt file.
The Recovery Console method is the one performed by many. Usually a simple reboot of the system will do the trick, but this may not solve the issue permanently. Boot from the Windows installation media and, when you reach the setup page, press R to repair the installation. After this step, open the Recovery Console screen by pressing C.
After typing the number of the Windows installation you want to repair, type the local administrator password if prompted. Then enter the appropriate commands at the console prompt, such as map, to obtain the drive letter. After the commands are entered, exit the console and restart Windows normally. This time you should not get the screen saying that the pci.sys file is corrupt or missing.
Another simple way to fix this error is to run the command prompt. Make sure you open it as the system administrator, since only an admin can make major changes and limited users cannot. At the prompt, type “sfc /scannow” and hit the Enter key. A scan runs, and at the end of it the error will usually be fixed. Restart the computer to make sure you do not get the error message again. These are some of the most commonly used methods to fix this error.
Source by Celly Kayser
Nagios Log Monitoring – Monitor Log Files in Unix Effectively
Nagios Log File Monitoring: Monitoring log files with Nagios can be just as difficult as with any other monitoring application. However, once you have a log monitoring script or tool that monitors a specific log file the way you want, Nagios can be relied upon to handle the rest. This versatility is what makes Nagios one of the most popular and user-friendly monitoring applications out there. It can be used to monitor almost anything effectively. Personally, I love it. It has no equal!
My name is Jacob Bowman and I work as a Nagios Monitoring specialist. I’ve come to realize, given the number of requests I receive at my job to monitor log files, that log file monitoring is a big deal. IT departments have the ongoing need to monitor their UNIX log files in order to ensure that application or system issues can be caught in time. When issues are known about, unplanned outages can be avoided altogether.
But the common question often asked is: what monitoring application is available that can effectively monitor a log file? The plain answer to this question is NONE! The log monitoring applications that do exist require way too much configuration, which in effect renders them not worthy of consideration.
Log monitoring should allow for pluggable arguments on the command line (instead of in separate config files) and should be very easy for the average UNIX user to understand and use. Most log monitoring tools are not like this. They are often complex and require time to get familiar with (through reading endless pages of installation setups). In my opinion, this is unnecessary trouble that can and should be avoided.
Again, I strongly believe, in order to be efficient, one must be able to run a program directly from the command line without needing to go elsewhere to edit config files.
So the best solution, in most cases, is to either write a log monitoring tool for your particular needs or download a log monitoring program that has already been written for your type of UNIX environment.
Once you have that log monitoring tool, you can give it to Nagios to run at any time, and Nagios will schedule it to be kicked off at regular intervals. If after running it at the set intervals, Nagios finds the issues/patterns/strings that you tell it to watch for, it will alert and send out notifications to whoever you want them sent to.
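Hooking such a tool into Nagios is then a matter of a command and a service definition. A sketch of what those objects might look like; every path, host name, and threshold below is an illustrative placeholder, not taken from the article:

```
define command {
    command_name  check_app_log
    command_line  /usr/local/nagios/libexec/check_log.sh /var/log/messages 60 'error'
}

define service {
    use                  generic-service
    host_name            appserver01
    service_description  Application log errors
    check_command        check_app_log
    check_interval       5
}
```

Nagios will then schedule the check every five minutes and notify on WARNING or CRITICAL exit codes.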
But then you wonder, what type of log monitoring tool should you write or download for your environment?
The log monitoring program you obtain to monitor your production log files should be as simple as the example below while still remaining powerfully versatile:
Example: logrobot /var/log/messages 60 'error' 'panic' 5 10 -foundn
Output: 2—1380—352—ATWF—(Mar/1)-(16:15)—(Mar/1)-(17:15:00)
Explanation:
The “-foundn” option searches /var/log/messages for the strings “error” and “panic”. Once it finds them, it exits with 0 (OK), 1 (WARNING) or 2 (CRITICAL). Each time you run that command, it prints a one-line statistics report similar to the Output above. The fields are delimited by “—”.
1st field is 2 = this is CRITICAL.
2nd field is 1380 = number of seconds since the strings you specified last occurred in the log.
3rd field is 352 = there were 352 occurrences of the strings “error” and “panic” found in the log within the last 60 minutes.
4th field is ATWF = don’t worry about this for now; it is irrelevant here.
5th and 6th fields = the log file was searched from (Mar/1)-(16:15) to (Mar/1)-(17:15:00), and within that timeframe 352 occurrences of “error” and “panic” were found.
If you would actually like to see all 352 occurrences, you can run the below command and pass the “-show” option to the logrobot tool. This will output to the screen all matching lines in the log that contain the strings you specified and that were written to the log within the last 60 minutes.
Example: logrobot /var/log/messages 60 'error' 'panic' 5 10 -show
The “-show” option will output to the screen all the lines it finds in the log file that contain the “error” and “panic” strings within the 60-minute time frame you specified. Of course, you can always change the parameters to fit your particular needs.
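For readers who prefer to roll their own, the core of such a check can be sketched as a small shell function. This is a bare-bones illustration of the OK/WARNING/CRITICAL exit-code convention, not logrobot itself; every name, path, and threshold below is arbitrary:

```shell
# check_log FILE PATTERN WARN CRIT
# Prints a Nagios-style status line and returns 0 (OK),
# 1 (WARNING) or 2 (CRITICAL) based on how many lines match.
check_log() {
    count=$(grep -c "$2" "$1")
    if [ "$count" -ge "$4" ]; then
        echo "CRITICAL - $count x '$2'"; return 2
    elif [ "$count" -ge "$3" ]; then
        echo "WARNING - $count x '$2'"; return 1
    else
        echo "OK - $count x '$2'"; return 0
    fi
}

# Demonstration against a small sample log:
printf 'ok\nerror one\nerror two\n' > /tmp/app.log
rc=0
check_log /tmp/app.log error 1 5 || rc=$?   # prints: WARNING - 2 x 'error'
echo "exit code: $rc"                       # prints: exit code: 1
```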
With this Nagios Log Monitoring tool (logrobot), you can perform the magic that the big name famous monitoring applications cannot come close to performing.
Once you write or download a log monitoring script or tool like the one above, you can have Nagios or CRON run it on a regular basis which will in turn enable you to keep a bird’s eye view on all the logged activities of your important servers.
Do you have to use Nagios to run it on a regular basis? Absolutely not. You can use whatever you want.
Source by Jonathan Rayson