Monday, November 14, 2011

Method for cooking down Pumpkin

Hello again.  Been a while, been busy, and thought I'd write something unrelated to systems administration.

If you find this helpful, leave a comment.  Thanks for reading!

One of the things that has always frustrated me around Halloween is throwing out the pumpkin that was carved just a night or two ago.  If it's a reasonable size to carve, you're talking about throwing out at least a couple of cans' worth of pumpkin each time.

If you've ever tried to cook pumpkin before, you know the amount of work involved: scraping out the insides (and saving the seeds if you enjoy eating them), cutting up the pumpkin to cook it, finding a way to get the peel off without burning your fingers (I really hate that part), and then turning it all into puree.  So after trying several different things, here's what my wife and I have come up with.

This method minimizes the amount of work you'll have to put in, as well as any burns you might receive from handling hot pumpkin.

Stats for one pumpkin (reasonable carving size)
Total cook time: 2 hours
Total prep time: 1.5 hours w/ seeds, 1 hour w/o seeds
Total seeds: 1 cup (approx)
Total pumpkin yield: 1 quart

Step one: Cut pumpkin in half, seed, and scrape out stringy insides.

Yes, getting the seeds can be a bit of a slimy mess, but if you enjoy eating them like I do, it's worth it.  For about 15 minutes of work, you end up with about a cup of seeds per pumpkin, and they're easier to get out than sunflower seeds.

If you've carved your pumpkin, you've already gone through the process of cleaning out the inside, so just cut the pumpkin in half.

Step Two: Cut pumpkin into strips no more than 1" thick.

I find that holding the pumpkin with the outer shell towards you and pushing down on the handle end of the knife works well.  I also use the largest knife we have for this.  Also, cutting a single strip that contains both the stem and the stub where the flower was (the bottom) makes it easy to remove them.

Step Three: With a vegetable peeler, remove the outer shell.

When I finally thought to do this, I was surprised how easy it was.  It's a bit more like peeling carrots than potatoes, and removes the shell quickly without much effort.  You'll want the peeler at an angle, rather than the whole blade flat on the pumpkin, or it will be harder to get started; once started, it's pretty easy to get under the shell.

Step Four: In a 6 qt pot, put in 1/2 cup water (enough to cover the bottom about 1/4"), and place the pumpkin in.  Cook covered for 1 hour over med-low heat.  Pumpkin is cooked when it cuts easily with a fork.

Cooking draws a lot of water out of the pumpkin.  You'll start with the 1/2 cup, but you may have to drain the pot a few times to keep it from boiling over.  You may also want to cut the strips into smaller pieces (4-6") to fit them into the pot.

Step Five: Pack the pumpkin in a blender, mashing out as much water as possible.  Then, puree the pumpkin.

You can actually fit 1 whole pumpkin in a blender that holds a quart.  It is preferable to have a blender that also has a dispenser on the bottom, since this is the easiest way to get the pureed pumpkin out.  I use a potato masher to press the pumpkin in.  Also, you'll want to get as much water out now as you can, before you puree the pumpkin.

Step Six: Cook puree uncovered over med-low heat to remove water, stirring occasionally, until it makes a paste about the consistency of semi-thick oatmeal. (about 1 hour)

Your pumpkin is now ready to use in recipes (pie, scones, oatmeal, cookies, butter, etc.)

Friday, July 29, 2011

VTP on Cisco Switches in a Small Company (aka: my network just drops)

Sorry it's been a while.  Here's the most recent fun bang-head-here problem I was able to resolve.

Situation:

3 Cisco Switches in an office.  1x 3750, 2x 2960S

Every so often, at random intervals, the network connections for all the clients would just vanish.  Connectivity through the main switch was fine (I used Zenoss to monitor; it only reported failure of the switches and a printer beyond them), but I couldn't get to any of the clients, and they couldn't use the network, let alone the internet.

Troubleshooting:
I tried everything I could think of to identify this problem: checked spanning tree, checked logging to see if I could catch it (this was one of those really random, highly unpredictable problems), made sure the VLANs were set correctly, had Zenoss pulling SNMP data for interface utilization on the trunks, etc.

What I noticed was the following: the graphs didn't show any vertical breaks, so the interfaces never went down, even though the network connections would drop.  This meant the switches were up, and there was no problem with the physical wiring or with the power to the switches.

After asking someone more knowledgeable than me, I was pointed in the direction of the VTP settings.

What I learned (they probably cover this in CCNA 101): VTP is a proprietary Cisco protocol used to simplify VLAN management across many, many switches (think triple digits or higher), allowing network admins to manage them all from one point.  Makes sense; it cuts down on mistakes and on the time it takes to configure a switch fabric.  My problem was that the three devices installed in the company had not been configured correctly, and since they were all unconfigured when they were added, they all became VTP servers.  Apparently, they couldn't decide which switch was authoritative, and whenever the switch designated as the true master changed, all the VLANs would be deleted off the switches and then added back.  The net result was that the switches looked like they were going down.  Highly unpredictable, highly annoying (to everyone).

Resolution:

Set the switches to VTP transparent mode.  The commands were really as simple as:

log in
config t
vtp mode transparent
write mem

Some things to remember: check your VTP status to see where you are on a given switch (show vtp status), and make sure you are not using VTP pruning when you make the change.  The change does not prevent you from connecting to the switch (some reported a delay, but I didn't experience one), but if VTP pruning is in place, it can cause problems getting your clients to connect as you change switches in the environment.  Since the environment I'm in is so small, I just set VTP transparent mode; I could still set the VLANs on those switches manually, and they would still forward VTP packets.
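
On each switch, that boiled down to something like this (a rough sketch; the VLAN number and name here are made up, and the VLANs you define should match what the old VTP server was carrying):

show vtp status              (confirm the current mode before touching anything)
conf t
vtp mode transparent
vlan 10
name office-users
end
show vtp status              (should now report transparent)
write mem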

Info that I used included the following:
https://supportforums.cisco.com/thread/2029581 (be sure to read the whole forum thread)
http://www.cisco.com/en/US/tech/tk389/tk689/technologies_tech_note09186a0080094c52.shtml (main page about VTP configuration and what it is and does)
http://www.cisco.com/warp/public/473/vtp_flash/
(this really helped with my understanding of VLAN Trunking Protocol, VTP; the first problem discussed is exactly what I was facing, called Problem #1, of all things)

Something else I learned (again, probably CCNA 101): a good protection technique when making changes where you might lose connectivity to a switch is to start with the following before you make your change:

reload in {mmm|hhh:mm}
<make your change>
reload cancel (after change is complete)

This allows you to work, and if something happens so that you can no longer connect to the switch, it will reload on schedule and come back up with the saved config that worked before you started.
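
For example, with a ten-minute window (the time is arbitrary; note that write mem comes last, so a forced reload really would roll your change back):

reload in 10
conf t
vtp mode transparent
end
(confirm you can still reach the switch)
reload cancel
write mem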

I'm sure there are more knowledgeable networking folks out there, but this was how I solved this problem for the time being.  Simply putting this out there for anyone who could use it; like me when I run into this problem again. (=

Friday, May 27, 2011

Setting up SNMP on OSX 10 Xserve via SSH

Setting up snmpd on Mac OS X Server via SSH:

verify /usr/sbin/snmpd exists
 ls /usr/sbin/snmpd

verify /usr/share/snmp/snmpd.conf exists
 ls /usr/share/snmp/snmpd.conf

if snmpd.conf doesn't exist, run:
 /usr/bin/snmpconf -i (-i is required to write the file to the correct location)

    set the following options:
 
 default = all
 1 (snmpd.conf)
 1 (access control setup)
  3 v1/2c ro community name
   <ro_community>
  f
 4 (Agent operating mode)
  2 (system user agent runs as)
   root
  f
 5 (system information setup)
  1 (physical location of system)
   <system_location>
  f
 6 (trap destinations)
  2 (v2c trap receiver)
   <monitoring_system_ipaddr>
   [ENTER]
   [ENTER]
  3 (v2c inform receiver)
   <monitoring_system_ipaddr>
   [ENTER]
   [ENTER]
  5 (default trap sink community)
   <ro_community>
  f
 f
    q


starting snmpd:
 /usr/sbin/snmpd

restart snmpd:
 kill -HUP <pid>

finally, to make sure it runs at boot time:
=======================================================================================
from: http://scott.wallace.sh/2009/12/04/enabling-snmp-in-mac-os-x-10-6-snow-leopard/
---------------------------------------------------------------------------------------
Under Snow Leopard there is a slight change to the way services are enabled.
-w       Overrides the Disabled key and sets it to false. In previous versions, this
         option would modify the configuration file. Now the state of the Disabled key
         is stored elsewhere on-disk.

So, to enable the SNMP daemon correctly:
$ sudo launchctl load -w /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist
=======================================================================================
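
To sanity-check that the agent is actually answering (run this on the Xserve itself, or point it at the Xserve's address from your monitoring box), a quick walk of the MIB-2 system group should come back with sysDescr and friends; substitute the community string you set above:

 snmpwalk -v 2c -c <ro_community> localhost 1.3.6.1.2.1.1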

Monday, May 23, 2011

Configuring DNS Servers on OS X via SSH

Need to set up/change the DNS servers on a Mac OS X system, using SSH.  Command to use is networksetup.  For instance:

$ networksetup -help

for all its gory details.

What concerns me today is just the DNS config for the system.  So, without further ado, commands are in bold:

user:~ localhost$ networksetup -listallnetworkservices
An asterisk (*) denotes that a network service is disabled.
Ethernet 1
Ethernet 2
*Built-in Serial Port (1)
FireWire
user:~ localhost$ networksetup -getdnsservers Ethernet\ 1  (observe your character escape sequences)
192.168.1.10
192.168.1.9
user:~ localhost$ networksetup -getdnsservers Ethernet\ 2
192.168.1.10
192.168.1.9
user:~ localhost$ sudo networksetup -setdnsservers Ethernet\ 1 10.75.66.2
Password:
user:~ localhost$ networksetup -getdnsservers Ethernet\ 1
10.75.66.2
user:~ localhost$ sudo networksetup -setdnsservers Ethernet\ 2 10.75.66.2
user:~ localhost$ networksetup -getdnsservers Ethernet\ 2
10.75.66.2
user:~ localhost$
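
Two related invocations worth knowing: you can pass several servers in one shot (they're space-separated), and, if memory serves, the keyword Empty clears the list entirely.  The second DNS address below is made up for illustration:

user:~ localhost$ sudo networksetup -setdnsservers Ethernet\ 1 10.75.66.2 10.75.66.3
user:~ localhost$ sudo networksetup -setdnsservers Ethernet\ 1 Empty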

Friday, May 20, 2011

Making X work on RHEL/CEntOS 5 after VMWare P2V Import

Used the VMware Standalone Converter running on my local machine to import a RHEL/CEntOS 5 Linux system, and afterward was greeted with the following (this is the CLI; the GUI had its own errors):

-----snip-----
[root@qa01 ~]# startx
xauth:  creating new authority file /root/.serverauth.9175
xauth:  creating new authority file /root/.Xauthority
xauth:  creating new authority file /root/.Xauthority


X Window System Version 7.1.1
Release Date: 12 May 2006
X Protocol Version 11, Revision 0, Release 7.1.1
Build Operating System: Linux 2.6.18-164.6.1.el5 x86_64 Red Hat, Inc.
Current Operating System: Linux qa01.localdomain 2.6.18-164.11.1.el5 #1 SMP Wed Jan 6 13:26:04 EST 2010 x86_64
Build Date: 16 November 2009
Build ID: xorg-x11-server 1.1.1-48.67.el5_4.1
        Before reporting problems, check http://wiki.x.org
        to make sure that you have the latest version.
Module Loader present
Markers: (--) probed, (**) from config file, (==) default setting,
        (++) from command line, (!!) notice, (II) informational,
        (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Fri May 20 12:28:27 2011
(==) Using config file: "/etc/X11/xorg.conf"
(EE) No devices detected.

Fatal server error:
no screens found
XIO:  fatal IO error 104 (Connection reset by peer) on X server ":0.0"
      after 0 requests (0 known processed) with 0 events remaining.
[root@qa01 ~]#
-----snip-----


Some digging online led me to http://www.vmware.com/pdf/osp_install_guide.pdf

I realize it probably makes more sense to add the yum repo, but since I was in a hurry, I just pulled the files manually and did a local install.  The files I needed are listed below:

Files are located at: http://packages.vmware.com/tools/esx/4.1/rhel5/x86_64/

IMPORTANT!!! BUILD NUMBERS ARE CRITICAL!!!
      Make sure you get the build number for your version of VMWare and Guest OS

   vmware-tools-nox-8.3.2-257589.el5.x86_64.rpm

   vmware-tools-8.3.2-257589.el5.x86_64.rpm
   vmware-tools-common-8.3.2-257589.el5.x86_64.rpm

   vmware-open-vm-tools-8.3.2-257589.el5.x86_64.rpm
   vmware-open-vm-tools-common-8.3.2-257589.el5.x86_64.rpm
   vmware-open-vm-tools-nox-8.3.2-257589.el5.x86_64.rpm
   vmware-open-vm-tools-xorg-utilities-8.3.2-257589.el5.x86_64.rpm
   vmware-open-vm-tools-kmod-8.3.2-257589.el5.x86_64.rpm
   vmware-open-vm-tools-xorg-drv-mouse-12.6.4.0-0.257589.el5.x86_64.rpm
   vmware-open-vm-tools-xorg-drv-display-10.16.7.0-0.257589.el5.x86_64.rpm

Commands as follows:

wget http://packages.vmware.com/tools/esx/4.1/rhel5/x86_64/<package_name>
    (yes, this has to be done for each rpm)
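
If you'd rather not paste each one by hand, a quick shell loop over the file names above does the same thing (a sketch; trim the list to the packages and build numbers you actually need):

# pull each RPM listed above from the VMware package repository
base=http://packages.vmware.com/tools/esx/4.1/rhel5/x86_64
for p in \
    vmware-tools-nox-8.3.2-257589.el5.x86_64.rpm \
    vmware-tools-8.3.2-257589.el5.x86_64.rpm \
    vmware-tools-common-8.3.2-257589.el5.x86_64.rpm \
    vmware-open-vm-tools-8.3.2-257589.el5.x86_64.rpm \
    vmware-open-vm-tools-common-8.3.2-257589.el5.x86_64.rpm \
    vmware-open-vm-tools-nox-8.3.2-257589.el5.x86_64.rpm \
    vmware-open-vm-tools-xorg-utilities-8.3.2-257589.el5.x86_64.rpm \
    vmware-open-vm-tools-kmod-8.3.2-257589.el5.x86_64.rpm \
    vmware-open-vm-tools-xorg-drv-mouse-12.6.4.0-0.257589.el5.x86_64.rpm \
    vmware-open-vm-tools-xorg-drv-display-10.16.7.0-0.257589.el5.x86_64.rpm
do
    wget "$base/$p"
done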

wget http://packages.vmware.com/tools/keys/VMWARE-PACKAGING-GPG-RSA-KEY.pub

rpm --import VMWARE-PACKAGING-GPG-RSA-KEY.pub (this saves you from needing --nogpgcheck)

yum localinstall <big_list_of_all_rpms_use_tab_complete>


So, now that you're done with all that, it's time to reboot.  Yes, this is needed (remember that kmod rpm you just installed?).  After the reboot, everything works with X just fine.

And now things work virtually like they did before.  (=

Tuesday, April 19, 2011

Setting up OpenFiler 2.3 for VMWare ESXi storage

So I'm looking to set up OpenFiler as a storage backend for VMware ESXi, and just wanted to include some notes here, since the examples I've found in the forums aren't what I wanted to set up (the disk sizes were fairly small).

So, the main thing to remember is that OpenFiler doesn't handle the automated partitioning of the hard drive you're installing it on.  The best info I've found on this (without purchasing the manual) is on Greg Porter's Wiki.  He has a lot of excellent info on OpenFiler, and I would highly recommend reading through it.  The list here is just intended to be a quick checklist of things that need to be done to get this set up.

A really rough run-down:

1. Get disk set up in RAID 5 on Dell 2850 (or hardware of your choice), preferably in hardware RAID rather than software RAID.
2. Boot from Openfiler 2.3 install disk (x86_64 in this case.... what, you haven't downloaded it already? (=  )
3. Using Graphical install, manually partition the disk as follows:
       /boot : 100MB ext3, Fixed size, Force Primary
       / (root): 2048MB ext3, Fixed size, Force Primary
      swap : 2048MB swap, Fixed size, Force Primary
4. Finish install (time, root password, etc.)
5. update, update, update (I've had some issues with this from the webgui; however, per the last entry of this forum, using "conary updateall" from the command line worked just fine.)
6. see Greg Porter's Wiki about the iSCSI reboot issue to prevent your stores from vanishing from the network upon reboot.  Last thing you want is for your system to boot and your services to fail.

At this point, you should be able to log in to OpenFiler, configure your shares, and get rolling.

Friday, April 15, 2011

DRAC4 Password Reset

So, I'm building out a system for my work and need DRACs for out-of-band management.  Found some Dell PowerEdge systems on eBay for cheap (2850's and 6850's) and tried to access the DRAC cards with the default login....

didn't work.  )=

Looking for information on the web has taken most of the last week, since Dell's docs don't make it clear where the tools you need are, and what needs to be installed for them to work.

Goal: have Out-Of-Band Management on a storage system (Dell 2850 running OpenFiler).

I'm eventually going to run OpenFiler on this system, but I installed CentOS 5.5 x86_64 to be able to complete this reset.


Someone else ran into the same problem (using Dell's tools on Windows), and the command that eventually resolved it for them was RACADM (see forum here).

In following the link in that forum, the CD it pointed to was not the correct one (the CD they list is for an initial install; I was looking for something I could install on an existing system).

So, here's the route that I went (racadm tools on CEntOS/RHEL 5.X):
(for 2850, this has been used to reset a DRAC 4/I)

Get Dell's RACADM installed on the system (instructions here)

Now, you can reset the card and be on your merry way:
      racadm racresetcfg (resets the DRAC)
      watch -n 60 racadm getsysinfo (lets you know when the DRAC has finished the reset)
      racadm setniccfg -s <rac ip addr> <netmask> <gateway>
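
For example, with made-up addresses; racadm getniccfg afterward is a quick way to confirm what the card ended up with:

      racadm setniccfg -s 192.168.1.120 255.255.255.0 192.168.1.1
      racadm getniccfg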

Open a browser, point it at http://<RAC ip addr>, accept the certificate, and log in with the dell DRAC default login of username: root, password: calvin

I should also mention that to get the DRAC reset, these commands need to be run locally on the system, not over the network (however, you can connect over the network after the reset has been performed).

Hopefully this will save some folks headaches related to getting into the DRAC cards they have.  Overall process should take about 30-60 mins.

Installing RACADM for Dell DRAC 4/I on RHEL/CEntOS

This has come in handy in several places.  Follow the steps below to get Dell's RACADM tools installed on a RHEL/CEntOS 5.x system.

UPDATE(2011.06.25) - use the LIVE CD when doing this, and you don't have to install CEntOS to fix this (ie: fix a system that already has something installed on it).  See notes at the end for additional commands.

Install CEntOS5.x Server and Server GUI from CD/DVD

install firefox with yum:
       yum install firefox
download the Dell OpenManage Deployment Toolkit:
       wget http://ftp.us.dell.com/sysman/dtk_3.5_new_43_Linux.iso

create a directory, mount the iso there and cd to that directory:
      mkdir /mnt/dtk_3.5_new_43_Linux
      mount -o loop -t iso9660 /path/to/dtk_3.5_new_43_Linux.iso /mnt/dtk_3.5_new_43_Linux/

      cd /mnt/dtk_3.5_new_43_Linux/
from here, run the following command to install the racadm tools:
       yum --nogpgcheck localinstall RPMs/x86/smbios-utils-bin-2.2.26-3.1.el5.i386.rpm RPMs/noarch/srvadmin-omilcore-6.5.0-1.385.1.el5.noarch.rpm RPMs/x86/srvadmin-racsvc-6.5.0-1.154.1.el5.i386.rpm RPMs/x86/libsmbios-2.2.26-3.1.el5.i386.rpm RPMs/x86/srvadmin-racadm4-6.5.0-1.154.1.el5.i386.rpm

UPDATE(2011.06.25) -  run the following commands to reset the DRAC without installing the OS

service racsvc start
locate racadm (it's in /opt/dell/something/i/dont/remember)
/opt/dell/rest/of/path/racadm racresetcfg

Even though this will scream that it can't access the card, the card should be reset when you reboot.
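
If locate comes up empty (the locate database usually isn't built on a live CD), a plain find will track the binary down:

find /opt/dell -name racadm -type f 2>/dev/null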

Tuesday, April 5, 2011

Configuring Alfresco 3.4 for AD SSO

My apologies if this is a bit rough, but I wanted to get this out due to the intense interest related to Alfresco.  Also, I would highly recommend setting up Alfresco like this from the beginning if you can, since it allows you to manage login from one database (fewer passwords for your users to remember, fewer systems for you to manage).

Goal: configure Alfresco 3.4 Community Edition to authenticate users as follows:

Internal users use SSO through Active Directory
External users authenticate against Active Directory (non-SSO)
Account info is synchronized with Active Directory

This information is based on http://wiki.alfresco.com/wiki/Alfresco_Authentication_Subsystems

Also, I've done the standard install (everything) via the GUI-based installer available from Alfresco.org to a clean CentOS 5.5 system.

First, we have to update the authentication chain in alfresco-global.properties (see my previous post for its location).  I added the following lines:

### Authentication Chain ###
authentication.chain=alfrescoNtlm1:alfrescoNtlm,passthru1:passthru,ldap1:ldap-ad
alfresco.authentication.authenticateCIFS=false
passthru.authentication.domain=<domain_name>
ldap-ad.authentication.active=false

Remember, passthru.authentication.useLocalServer, passthru.authentication.domain and passthru.authentication.servers are mutually exclusive, so only enable one of them.


Multiple Auth Servers of the same type
---------------------------------------
If you are using two different servers with the same authentication type (i.e. two different LDAP servers; not possible with passthru!), you need to copy the .properties file from:
/opt/alfresco-3.4.c/tomcat/webapps/alfresco/WEB-INF/classes/alfresco/subsystems/Authentication/<auth_type>/<auth_type>.properties

to
/opt/alfresco-3.4.c/tomcat/shared/classes/alfresco/extension/subsystems/Authentication/<auth_type>/<auth_type_instance#>/<auth_type>.properties

You will need to create the directory tree below the extension subdirectory, starting with subsystems; a sketch of the commands is just below.  Remember, this is only required if you have two auth servers using the same auth type.  Check the Alfresco wiki if you aren't sure.
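
As a sketch, with ldap as the auth type and ldap2 as a made-up instance name (substitute your own), the copy looks something like this:

mkdir -p /opt/alfresco-3.4.c/tomcat/shared/classes/alfresco/extension/subsystems/Authentication/ldap/ldap2/
cp /opt/alfresco-3.4.c/tomcat/webapps/alfresco/WEB-INF/classes/alfresco/subsystems/Authentication/ldap/ldap.properties \
   /opt/alfresco-3.4.c/tomcat/shared/classes/alfresco/extension/subsystems/Authentication/ldap/ldap2/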
---------------------------------------


In looking at the ntlm-filter.properties files in the passthru and ldap-ad folders, I found that these subsystems were already configured for passthru to have SSO enabled.  Also, I found that if you have passthru and alfrescoNtlm set up, after an unsuccessful SSO login, the "backdoor" URL (http://<hostname_or_IP>:8080/alfresco/faces/jsp/login.jsp) will automatically display (at least in Firefox).  So this is actually as expected, since it fails through to the local login.  I don't know how this would look to the outside world, since I'm mainly using this on a company intranet right now.

So, to recap: after doing everything above, this is where I'm at:

Goal: configure Alfresco 3.4 Community Edition to authenticate users as follows:

Internal users use SSO through Active Directory - this is completed and working fine
External users authenticate against Active Directory (non-SSO) - this is completed as far as I can tell
Account info is synchronized with Active Directory - this isn't working right now, so I've missed some settings for this authentication type.  I suspect I may not have the OU/CN/DC settings correct for what AD wants to see.

Thursday, March 24, 2011

Getting Windows Deployment Services running

There are some good resources out there for Windows Deployment Services (WDS), such as the following:

technet.microsoft.com (obligatory manual reference)
Dan Stolts blog
Tom and Jason include some nitty gritty details on their blog
www.google.com (obligatory google reference)

Basically, I'm trying to set up the following:

WDS service on non-DHCP server in an AD environment with DHCP running on AD Domain Controllers only
I should note that the server is 2008 Standard R2, and the AD DCs are 2008 R2 and 2003.

DHCP scope option settings:
66 - <ip of WDS server>
67 - boot\x86\wdsnbp.com

I'm also setting this up so that unknown devices need admin approval in AD (pending devices approval in WDS), and the problem I've run into is the following:

-------------snip-------------
An error occurred while trying to create the machine account for the following  device:

 Name: install01
 OU: CN=Computers,DC=<company_name>,DC=local
 MAC Address: 00000000000000000000BC305B9C1C03
 GUID: 44454C4C560010348039B8C04F435031

 Error Information: 0x5
-----------end snip-----------

This also shows up with Task category BINLSVC and an Application Error code of 524 (google search of "microsoft wds error 524" has details).

Solution to this is at the following technet page, and included below:

Per Microsoft's Technet page:
--------------------------------

Ensure that the server has the necessary permissions

To perform this procedure, you must either be a member of the local Domain Admins group or have been delegated the appropriate authority.
To grant permissions:
  1. In Active Directory Users and Computers, locate the organizational unit that you are creating machine accounts in. The organizational unit is specified in the server properties for the Windows Deployment Services server.
  2. To view the organizational unit information, open the Windows Deployment Services MMC snap-in, right-click the server name, click Properties, and then click the Directory Services tab.
  3. Right-click the organizational unit, and then click Delegate Control to grant the Windows Deployment Services server Full permission to create and edit accounts.
Note: The computer that caused this issue is specified in the event message string. To view this information, open Event Viewer, expand Custom Views, expand Server Roles, click Windows Deployment Services, and then locate BINLSVC event 524 or 525.
--------------------------------

In my case, I opened AD Users and Computers, selected the OU where I wanted the installed systems to show up, right-clicked and selected "Delegate Control", and then had to do the following:

change "Object Types..." to Computers
enter the beginning of the system name and "Check Names"
verify computer name and click next
select "Create a custom task to delegate", click next
select "Only the following objects in the folder:"
check the "Computer objects" box
check the "Create selected objects in this folder"
leave "Delete selected objects in this folder" UNchecked
click next
check "Full Control", click next
click finish

At this point, you'll be able to name devices in the "Pending Devices" tab for the WDS role when you approve them without that annoying error.

The beauty of this is that once you have the server set up and the OSes configured for install, you can literally just plug the computer in at its location, PXE boot it, install the OS, and pull in the user data in one fell swoop.  Also, you can use this system to manage server images as well as desktop images.  While there are other ways of installing systems, especially in a VM environment (templates, ghost images, etc.), the advantage this holds is that you can install both virtual and physical systems from this one server, and be sure that you have the same config on all your systems.  See Chapter 3 of "The Practice of Systems and Network Administration, 2nd Edition" for more wise counsel related to systems configuration and automated installation.

Wednesday, March 9, 2011

Setting up OTRS on CEntOS 5.5

OTRS (Open Ticket Request System) is a great open source ticketing system with a pretty clean interface, written entirely in Perl.  Below are some notes from setting this up on CentOS 5.5; see the website above for full install instructions.

some things to remember:

Run /opt/otrs/bin/otrs.checkModules to verify that all the required Perl modules are installed; the RPMForge yum repo can help with missing Perl packages.

Use the GenericAgent to automagically move or delete tickets.  This works great for deleting stuff in the junk folder.

I set up 2.4.9, not 3.0 (the interface was significantly changed in 3.0 and I'm not used to it yet; I think there was another reason for this as well, but I can't remember it right now).

Remember to set mysqld and httpd to start at boot with chkconfig --level 2345 <daemon> on.
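
Spelled out, and starting the services now rather than waiting for a reboot:

chkconfig --level 2345 mysqld on
chkconfig --level 2345 httpd on
service mysqld start
service httpd start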

Thursday, March 3, 2011

Cheap x64 VM environment how-to: Dell 6850 w/ Intel Xeon 7140's (SL9HA)

Short note, hope this might save someone else a few "bang head here" headaches:

need: cheap VM environment with the ability to run 64-bit VM's in VMWare ESXi4.1

Solution: Dell 6850 w/ 4x Intel Xeon 7140m processors and 32GB RAM (cost, $1220)

Problem number 1: 6850's require 200-240V power.  Since I was going to use this in my home, I don't have a circuit with that voltage (think electric stove or electric dryer; these are the plugs with a diameter about half that of a CD).  Had I noticed this before the purchase, I wouldn't have bought the system.  However, I was able to use it for work.

I replaced the existing CPUs and hooked up power, only to run into...

Problem 2: the system wouldn't POST, wouldn't get into the BIOS config, and reported that the processors were incompatible with the system.  The BIOS was the latest for the 6850 (A06), and the motherboard part is WC983, Rev A00.  I double-checked the 6850 documentation PDF that Dell put out and confirmed that the 7140M is indeed compatible (read the fine print; it was used for a benchmark).

"No problem, I'll just call Dell."  Make sure you have your system ownership information updated before doing so, or you'll get nowhere.  That was problem number 3.

After talking with Dell over two days (it seems even they have to dig for this info), it turns out that you need the following parts for the Xeon 7140M (SL9HA) processors (these are mandatory):

2x Dell Part YC902 (Voltage Regulator Modules)
4x Dell Part WG189 (Heatsinks for Motherboard, N6164 will not work)
1x Dell Part PD838 (3rd VRM for Cache) CANNOT BE PART K5331!
1x Dell Part RD318 (6850 Mother Board)
4x Dell Part ND891 (Memory Risers, part N4867 did not work)
1x The rest of the server

So, is the 6850 a cheap, viable option for running 64-bit VM's in ESXi4.1?  Viable, with a few caveats.  First, make sure you have the correct voltage!  Second, you must have the Xeon 7000 series processors (Intel part number SL9HA, SL9HB, SL9HC, SL8UD, SL8UB) since these are the only ones with the VT-x technology you need.  Third, make sure you have the voltage regulators to support the processors.  Finally, make sure you have the right 6850 Motherboard (RD318 if you want to run the SL9HA's).

As for cheap?  Well, after getting the rest of the parts needed (VRMs, rails, disk), the total for the unit I've put together will be about $1220 for 8 cores @ 3.4 GHz, 64MB cache, 24GB RAIDed RAM, 800 FSB, and 2x 36GB 10K U320 SCSI HDD in RAID 1.  Stallard, Inc sells comparable 1950s for about $3460 each (w/o RAM RAID), but they have to be Gen III and are still only capable of 24MB cache max.  You might be able to find everything on eBay for a bit cheaper, but it's still going to cost you more than $1000.  I should probably also mention that this system is not hosting the storage (using a 2850 running OpenFiler for that for the time being; 6x 146GB 10K U320 SCSI = 730GB usable in 5+1 RAID 5, approx $400).

For those of you looking to repeat what I've done, here's a list:

Dell 6850 (liquid8technology.com has them w/ 16GB RAM for a good price on ebay)
4x 4GB (2x2GB) PC2-3200 DDR2 RAM (server-ram on ebay)
2x 36GB 10K U320 SCSI HDD (check your back plane, could be SAS)
4x Intel Xeon 7140m (SL9HA)
4x Heatsinks (WG189)
1x 6850 RD318 Motherboard
4x ND891 Memory Risers
2x YC902 Voltage Regulator Modules
1x PD838 Voltage Regulator Module (hard to find on Ebay, can be as much as $150 elsewhere)
Rails, of course

Happy Virtualizing!

Monday, February 28, 2011

Configuring Alfresco Community Edition for OpenOffice Document Transforms

This is for Alfresco version 3.4.d.

I have been getting an error similar to the following after configuring a rule in Alfresco for converting documents from MS formats to Open Document formats:

Failed to run Actions due to error: 02050010 Transformer for 'application/msword' source mime type and 'application/pdf' target mime type was not found. Operation can't be performed

Since I'm now having to dig for this for the second time, I figured I'd write it down.

See the following pages:

http://wiki.alfresco.com/wiki/Setting_up_OpenOffice_for_Alfresco (most details)
http://wiki.alfresco.com/wiki/Repository_Configuration (location of the .properties file)

The long and the short of it is this:

find the location of the "alfresco-global.properties" file

[root@OPS5-Alfresco ~]# locate alfresco-global.properties
/opt/alfresco-3.4.d/tomcat/shared/classes/alfresco-global.properties

if you open this file, look for the following line:

ooo.enabled=false

change to true using your favorite editor (vi, nano, etc)
restart alfresco

service alfresco restart (bounces tomcat and mysql)
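
(Side note: for the edit itself, if you'd rather not open an editor, a sed one-liner against the path located above does the same thing; worth a quick grep afterward to confirm it took.)

sed -i 's/^ooo.enabled=false/ooo.enabled=true/' /opt/alfresco-3.4.d/tomcat/shared/classes/alfresco-global.properties
grep ooo.enabled /opt/alfresco-3.4.d/tomcat/shared/classes/alfresco-global.properties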

to know when the alfresco system is up, use "tail -f /opt/alfresco-3.4.d/tomcat/logs/catalina.<date>.log" and look for the line:

INFO: Server startup in 136056 ms

This will confirm that when Alfresco starts, OpenOffice will be started with the service and be able to convert documents.  Happy uploading!

Sunday, January 23, 2011

Print from Linux to a Windows Shared Printer

Setting up a printer in CUPS to connect via a Samba share (or so I thought):

I highly recommend the site openprinting.org; they have a very thorough list of printers and their known working states.

I have a Samsung ML-2010, so the drivers aren't included by default on Linux, and wonderful Samsung isn't going to help us out.

Some sites that have helped:

- http://www.linuxfoundation.org/collaborate/workgroups/openprinting/database/driverpackages
- http://www.linuxfoundation.org/collaborate/workgroups/openprinting/database/cupsdocumentation
- http://localhost:631/admin (for those unfamiliar, this is the local web management interface for CUPS)
- http://tldp.org/HOWTO/SMB-HOWTO.html#toc8

Well, this battle is finally over.  When I started, I had a small Zonet ZPS2102 print server (that I've had for years), set up on the network and successfully allowing Windows systems (XP and 7) to print to it via a Samba share.  However, as I've been moving to a truly mixed environment at home, I needed to give my Linux systems printing capability.  I wanted to do this without changing where the printer was plugged in, so I didn't lose printing capability in the process.  Basically, add the Linux systems without affecting anything else.

After banging my head against the Samba/CUPS wall for several hours, I decided to try the LPD/LPR configuration on the print server.  This was after I'd visited several sites, learned more about Samba than I was expecting to at this point, reworked the /etc/samba/smb.conf file several times, and finally plugged the printer directly into my computer to see if the drivers were working.

With a successful print from a direct physical connection, I decided to try the LPD/LPR config, and it worked like a charm.  I hope this saves you some time, I know I'll certainly remember it.  So...


Total time to complete this is about 15 to 20 minutes tops.  (I'll spare you the extra book I wrote while I was banging my head against a wall.)

Setting up a ("non"-) supported printer like the Samsung ML-2010 on linux:

WARNING... if you are on RHEL/CEntOS 5.5, you will only have LSB 3.1 installed (and no way to get to LSB 3.2 without compiling it yourself), so make sure you download the correct splix RPM for RHEL/CEntOS 5.

verify signature
[user@linuxbox Downloads]$ rpm -K splix-1.0.1-3lsb3.1.i486.rpm
splix-1.0.1-3lsb3.1.i486.rpm: sha1 md5 OK

you also need to confirm you have the rest of the tools to make this work:

# yum install foomatic ghostscript
# yum --nogpgcheck localinstall splix-1.0.1-3lsb3.1.i486.rpm


Navigate to /opt/splix/ppds/Samsung/ and use gunzip to extract the file you need; the .gz files as installed won't work for CUPS:

# cd /opt/splix/ppds/Samsung
# gunzip -d Samsung-ML-2010-splix-en.ppd.gz

on the system you want to install the printer on, go to:
http://localhost:631/admin

Click on Administration tab
Click on Add New Printers
click on Add Printer
    under Name: => <Unique_Printer_Name> (PrintServer_Samsung_2010)
        click Continue
    select "LPD/LPR Host or Printer", click continue
    device URI: => lpd://<hostname_or_ip>/<lpd_queue_name>
         (in my case, lpd://192.168.x.x/Samsung_2010)
    Select "Browse" and browse to /opt/splix/ppds/Samsung/
         Select the file you unzipped above, click Open
    Click "Add Printer"

Click on the Printers Tab
Assign the printer as the default, and print a test page.
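
For what it's worth, the same printer can be set up from the shell with lpadmin instead of the web interface (a sketch using the example names above; -E both enables the queue and has it accept jobs, lpoptions -d makes it the default, and lp sends a quick test print):

lpadmin -p PrintServer_Samsung_2010 -E \
        -v lpd://192.168.x.x/Samsung_2010 \
        -P /opt/splix/ppds/Samsung/Samsung-ML-2010-splix-en.ppd
lpoptions -d PrintServer_Samsung_2010
lp -d PrintServer_Samsung_2010 /etc/hosts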

Happy Printing!


Extra points: REMOTE INSTALL

To install this remotely, the only extra step was to log in to the other system via SSH with X11 forwarding enabled (-X), and then run "firefox &" to get the web interface for CUPS.  Total time is still the same.

$ ssh -X -l user 192.168.x.x  (to log in)



I think it is worth noting that I achieved my goal of getting my Linux systems to be able to print, but I did not complete my stated goal when I started, which was to set up the printer via Samba/CUPS.  However, this works just as well for me.

Take-away lesson: if there's another configuration possibility that you haven't tried and you're having to dig deep to solve the one you're on, try the other route first.