#1 Mr. Midnight (75+ Posting Member) | Join Date: Mar 2006 | Posts: 117

several computers

Hello folks, I have been busy working and building my sim. It's a slow process; in fact, I have really been getting my parts together and slowly building my sim. Right now I'm building my shell.

My last sim had 7 computers networked together, and I had them in a closet, air cooled and dust free.

I'm not much of a server guy, so this may sound lame.

I'm thinking of getting a cabinet that can house several motherboards and video cards, you know, like several computers in one big, tall, air-cooled box.

I will be using WideView and all the fixins.

I don't think this is a server by definition, but it is close.

It's just an idea right now.

The metal or wood box would be about six feet tall with a glass front door, and you would have, say, six or more spots for your motherboards at the top, with your hard drives at the bottom.

Not sure how I would handle the power supplies.

It's just an idea, like I said.

It's like taking six computers and putting them in one box.

What do you good folks think... Robert

#2 XOrionFE (150+ Forum Groupie) | Join Date: Jan 2008 | Location: Sugar Grove, IL | Posts: 264

My guess is that it would not be a good idea. First, you would need a power supply for each motherboard. Second, the CPUs and video cards need a lot of airflow to keep them cool. The small fans on the CPU and cards are not enough, which is why you typically have multiple case fans in a computer: they create a lot of airflow over a confined area. I think you would be best off with separate computer cases. You could still put them in a cabinet with shelves...

#3 Kennair (Boeing 777 Builder) | Join Date: Jan 2007 | Location: Perth, Western Australia | Posts: 730

    Hi Robert,

I agree with Scott: separate boxes housed in a rack. Power rails could be installed up the back of the rack, with a large fan at the top to draw the hot air out, combined with the individual PC fans. This is precisely how we house servers and individual PCs at my work, all in an air-conditioned environment.

KVMs can handle the multiple keyboards and mice, or you can do it in software using something like Multiplicity.

I doubt you'd find any sort of rack that would take multiple bare motherboards and the associated peripherals.

    Ken.
    Opencockpits | Aerosim Solutions | Sim-Avionics | P3D | FDS | FTX | AS16 | PPL | Kennair


#4 AndyT (1000+ Poster - Fantastic Contributor) | Join Date: Jun 2006 | Location: Oahu, Hawaii | Posts: 1,236

Actually, that is almost exactly how they build pro servers, or at least how they used to...

Yes, you will need a power supply for each board.
Yes, they will need lots of airflow and cooling to keep them safe and running well.
You should be able to buy your cabinet rails online at any shop that builds rack servers.

And if you are going to build it like this, then at the same time build it on a gigabit switch backbone. That will prevent any bandwidth issues. Be sure to use all identical parts: boards, video cards, memory, power supplies...

As stated above, you can use a KVM switch to minimise the number of keyboards and such.

I had thought about doing this at one time, but life interfered. I had planned to use actual server boards. They hold 2 quad-core CPUs each, and this would give you huge amounts of computing power (8 cores) on each machine. You should be able to run any pit of any complexity on this rig and never have a slowdown.

The huge downside is that it's going to cost you a fair chunk of loot to build it like this, but once you have that done, the rest is going to be sweet.

This is actually a pet project of mine that I still hope to complete one day.

Let us know what you decide.

EDIT:
If you build this, it's actually better to stand the boards on edge so that air can flow up and down through them instead of laying them flat.
And each hard drive should be close to the board it runs on. CD/DVD drives can all be together in one place.
    God's in command, I'm just the Pilot.
    http://www.geocities.com/andytulenko/

#5 AndyT (1000+ Poster - Fantastic Contributor) | Join Date: Jun 2006 | Location: Oahu, Hawaii | Posts: 1,236


And here is an added thought for you...
If you use motherboards with dual gigabit RJ45s, then you can double the bandwidth between the machines. Each one would have two connections to the backbone.

Yeah, I know it's not going to pan out exactly like that for all the supergeeks out there, but it will provide an incredible amount of throughput.
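
One way to actually pool the two ports under Linux is channel bonding, so both NICs look like a single fat pipe. A rough sketch in the Red Hat style (interface names, addresses and module options here are examples only, and they vary by kernel and driver version):

/etc/modules.conf:
alias bond0 bonding
options bonding mode=0 miimon=100

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
IPADDR=192.168.0.190
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

/etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1):
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

mode=0 is round-robin, which spreads packets across both ports; your switch has to be able to live with that.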

Whichever OS you use, make sure it is 64-bit, and fill your motherboards with memory. Each one should be capable of holding at least 16GB of RAM; most actual server boards can currently take between 32GB and 128GB. Obviously, as always, the more the merrier. If you simply cannot afford that much to start with, then make sure you put in at least 2GB per core (16GB on an 8-core board) and add more later as funds become available.

The dream machine I have planned uses quad-CPU boards. That means I can put 4 quad-core CPUs on each board: 16 cores per board. With 4 boards I would have enough power to fully replicate ANY pit I can possibly think of, including the bridge of the Enterprise. That's 64 cores, my friends. And if it happened not to be enough? Just add another board.

    Talk about your 'Central Computer'.
    God's in command, I'm just the Pilot.
    http://www.geocities.com/andytulenko/

#6 AndyT (1000+ Poster - Fantastic Contributor) | Join Date: Jun 2006 | Location: Oahu, Hawaii | Posts: 1,236

    And one last thought for the dreamers among us....

Linux has a few 64-bit versions out, and if your system is connected in the manner I've outlined (but not in detail), you can install it on one machine and it will see ALL the boards, cores and other resources and connect them all into one massive server. Now instead of running four 16-core machines, you would be running one 64-core machine.

And I think most of us know that Linux can run Windows software.

FSX will run like never before, maxed out with every add-on and plenty of hardware interfaced.

    Take that to FL50...
    God's in command, I'm just the Pilot.
    http://www.geocities.com/andytulenko/

#7 Kennair (Boeing 777 Builder) | Join Date: Jan 2007 | Location: Perth, Western Australia | Posts: 730

Andy, you're definitely on a mission, and I'd really like to see that end result!

    Ken.
    Opencockpits | Aerosim Solutions | Sim-Avionics | P3D | FDS | FTX | AS16 | PPL | Kennair


#8 Rodney (300+ Forum Addict) | Join Date: Nov 2005 | Location: Colorado Springs, CO USA | Posts: 332

The big issue is high-quality graphics. I don't know of a server board that supports any type of PCI-e video slot. 99.9% of them use the onboard video, and 3D is not what a server is about. But then again, I could be wrong, as I have not seen anything lately that supports anything other than onboard video. BTW, I work at an R&D enterprise-class storage shop.

What Linux flavor are you speaking of?
    Rodney -
    Real 727-200 pit
    Last Flown as N392PA
    FS9

#9 AndyT (1000+ Poster - Fantastic Contributor) | Join Date: Jun 2006 | Location: Oahu, Hawaii | Posts: 1,236

Rodney,

Here is a board I like for this:
http://www.supermicro.com/products/m...7300/X7QC3.cfm

And I've seen this done with Linux, but I did not do it myself, so I'm not sure which flavor it was; I seem to recall it was Red Hat. Although I do recall one of the guys telling me that any Linux should work. You have to install a kernel on each machine, and I'm afraid I don't have the rest of the procedure handy, but I think I'll go and dig it up again. This thread has brought back the dream.
    God's in command, I'm just the Pilot.
    http://www.geocities.com/andytulenko/

#10 AndyT (1000+ Poster - Fantastic Contributor) | Join Date: Jun 2006 | Location: Oahu, Hawaii | Posts: 1,236

    Building a Linux Cluster
Linux clusters are generally more common, robust, efficient and cost-effective than Windows clusters. We will now look at the steps involved in building a Linux cluster.

Step 1
Install a Linux distribution (I am using Red Hat 7.1 and working with two Linux boxes) on each computer in your cluster. During the installation process, assign hostnames and, of course, unique IP addresses to each node in your cluster.
Usually, one node is designated as the master node (where you'll control the cluster, write and run programs, etc.), with all the other nodes used as computational slaves. We name one of our nodes Master and the other Slave.
Our cluster is private, so theoretically we could assign any valid IP address to our nodes as long as each has a unique value. I have used IP address 192.168.0.190 for the master node and 192.168.0.191 for the slave node.
If you already have Linux installed on each node in your cluster, then you don't have to change your IP addresses or hostnames unless you want to. Changes (if needed) can be made using your network configuration program (Linuxconf in Red Hat).
Finally, create identical user accounts on each node. In our case, we create the user DevArticle on each node in our cluster. You can either create the identical user accounts during installation, or you can use the adduser command as root, as sketched below.
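For example, run the following as root on every node (a minimal sketch; pick your own username, but keep it identical everywhere):
adduser DevArticle
passwd DevArticle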
Step 2
We now need to configure rsh on each node in our cluster. Create .rhosts files in the user and root home directories. Our .rhosts files for the DevArticle users are as follows:
Master DevArticle
Slave DevArticle
The .rhosts files for the root users are as follows:
Master root
Slave root
Next, we create a hosts file in the /etc directory. Below is our hosts file for Master (the master node):
192.168.0.190 Master.home.net Master
127.0.0.1 localhost
192.168.0.191 Slave
Do not remove the 127.0.0.1 localhost line.
Step 3
The hosts.allow file on each node was modified by adding ALL: ALL as the only line in the file. This allows any user on any node permission to connect to any other node in our private cluster. To allow root users to use rsh, I had to add the following entries to the /etc/securetty file, one per line:

rsh
rlogin
rexec
pts/0
pts/1
    Also, I modified the /etc/pam.d/rsh file:
    #%PAM-1.0
    # For root login to succeed here with pam_securetty, "rsh" must be
    # listed in /etc/securetty.
    auth sufficient /lib/security/pam_nologin.so
    auth optional /lib/security/pam_securetty.so
    auth sufficient /lib/security/pam_env.so
    auth sufficient /lib/security/pam_rhosts_auth.so
    account sufficient /lib/security/pam_stack.so service=system-auth
    session sufficient /lib/security/pam_stack.so service=system-auth
Step 4
rsh, rlogin, Telnet and rexec are disabled in Red Hat 7.1 by default. To change this, I navigated to the /etc/xinetd.d directory and modified each of the service files (rsh, rlogin, telnet and rexec), changing the disable = yes line to disable = no.
Once the changes were made to each file (and saved), I closed the editor and restarted xinetd to enable rsh, rlogin, etc.:
/etc/rc.d/init.d/xinetd restart
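For reference, a typical Red Hat 7.1 /etc/xinetd.d/rsh looks roughly like this once edited (the rsh daemon registers under the service name shell; exact entries may differ on your install):
service shell
{
socket_type = stream
wait = no
user = root
log_on_success += USERID
log_on_failure += USERID
server = /usr/sbin/in.rshd
disable = no
}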
Step 5
Next, download the latest version of MPICH (UNIX, all flavors) from the MPICH website to the master node. Untar the file in either the common user directory (the identical user you established on all nodes, DevArticle on our cluster) or in the root directory (if you want to run the cluster as root).
Issue the following command:
tar zxfv mpich.tar.gz
Change into the newly created mpich-1.2.2.3 directory. Type ./configure, and when the configuration is complete and you have a command prompt, type make.
The make may take a few minutes, depending on the speed of your master computer. Once make has finished, add the mpich-1.2.2.3/bin and mpich-1.2.2.3/util directories to your PATH in .bash_profile, or however you set your path environment; the exact lines are sketched below.
The full root paths for the MPICH bin and util directories on our master node are /root/mpich-1.2.2.3/bin and /root/mpich-1.2.2.3/util. For the DevArticle user on our cluster, /root is replaced with /home/DevArticle in the path statements. Log out and log back in to pick up the modified PATH containing your MPICH directories.
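For example, the lines appended to ~/.bash_profile for root would be (substitute /home/DevArticle for /root for the user account):
PATH=$PATH:/root/mpich-1.2.2.3/bin:/root/mpich-1.2.2.3/util
export PATH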
Step 6
Next, make all of the example files and the MPE graphics files. First, navigate to the mpich-1.2.2.3/examples/basic directory and type make to build all the basic example files.
When this process has finished, you might as well change to the mpich-1.2.2.3/mpe/contrib directory and make some additional MPE example files, especially if you want to view graphics.
Within the mpe/contrib directory you should see several subdirectories; the one we are interested in is the mandel directory. Change into the mandel directory and type make to create the pmandel executable. You are now ready to test your cluster, as sketched below.
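As a quick smoke test (a sketch assuming the hostnames from Step 1): list both nodes, one per line, in MPICH's default machines file, mpich-1.2.2.3/util/machines/machines.LINUX:
Master
Slave
Then, from the mpich-1.2.2.3/examples/basic directory, run the bundled pi-calculation example across both nodes:
mpirun -np 2 cpi
If rsh and the paths are set up correctly, cpi reports an approximation of pi computed on both machines.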
    God's in command, I'm just the Pilot.
    http://www.geocities.com/andytulenko/

