HP ProLiant DL360 G7
From Bicom Systems Wiki
- 1 Introduction
- 2 Hardware Requirements
- 3 Deployment Guide
- 4 Installation wizard steps
- 5 Setup wizard steps
Introduction
The following guide describes the minimal and recommended hardware requirements, as well as the procedures for successful deployment and maintenance of the HP ProLiant DL360 G7 in a SERVERware 3 environment.
Hardware Requirements
Server requirements
HP ProLiant DL360 G7 with the following:
HARDWARE | MINIMUM SYSTEM REQUIREMENTS | RECOMMENDED SYSTEM REQUIREMENTS |
---|---|---|
CPU | 2.4 GHz quad-core Intel Xeon E5620 with 12MB cache | 2.93 GHz hex-core Intel Xeon X5670 with 12MB cache |
RAM | 16GB memory (4x4GB) PC3-10600R | 32GB memory (4x8GB) PC3-10600R |
Ethernet | 2 network interfaces (SERVERware Mirror edition), 3 network interfaces (SERVERware Cluster edition) | 4 network interfaces (SERVERware Mirror edition), 6 network interfaces (SERVERware Cluster edition) |
Controller | Smart Array P410i, 512MB BBWC, RAID 0-60 | Smart Array P410i, 512MB BBWC, RAID 0-60 |
Disk | 2 x 100GB 15K RPM SAS 2.5" HP hard disk drives (for system), 2 x 300GB 10K RPM SAS 2.5" HP hard disk drives (for storage) | 2 x 100GB 15K RPM SAS 2.5" HP hard disk drives (for system), 4 x 600GB 10K RPM SAS 2.5" HP hard disk drives (for storage) |
PSU | Redundant HP 460 Watt PSU | Redundant HP 750 Watt PSU |
IMPORTANT NOTE: Software RAID implementations (including onboard/motherboard RAID) are not supported and could cause problems that Bicom Systems would not be able to support.
KVMoIP (Keyboard, Video, Mouse over IP) remote access to each server
- Remote power management support (remote reset/power off/on)
- Remote access to BIOS
- Remote SSH console access
- A public IP assigned to KVMoIP
- If KVMoIP is behind firewall/NAT, the following ports must be opened: TCP (80, 443, 5100, 5900-5999)
SERVERware installation media is required in one of the following forms:
- DVD image burned onto a DVD and inserted into the DVD drive
- USB image written onto a 2GB USB drive and inserted into an operational USB port
Deployment Guide
Network setup
How-to instructions regarding network setup are available on the following page:
RAID Controller setup
- Press F8 during POST to enter the RAID setup configuration utility
- Create a logical drive (RAID 1) from the two 100GB drives (for the system)
- Create a logical drive (RAID 0) for each remaining drive (for storage)
- Exit the RAID setup configuration utility
Creation of USB installation media
USB images and instructions are available on the following how-to page:
Installation wizard steps
Boot the target machine from the USB installation media. The following welcome screen appears.
If the live system was able to pick up an IP address from the DHCP server, it will be shown on this screen. You can then access the system remotely via SSH on port 2020 with the username 'root' and password 'bicomsystems' and continue the installation (see the example after the list below). The welcome screen offers several options:
- Exit - Choose this option to exit the installation wizard and open the live system command line shell.
- Verify Media - This option goes through the installation media files, matches them against previously stored checksums, and checks for corruption.
- Next - Proceed to the next step.
- Network - Configure an IP address for remote access to the installation wizard.
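For example, if the welcome screen reports that DHCP assigned 192.168.1.50 (a placeholder address; use the one shown on your screen), the installer can be reached like this:
 ssh -p 2020 root@192.168.1.50
 # log in as 'root' with the default password 'bicomsystems'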
Step 1:
Select the type of installation: Storage/Controller or Host (Processing or Backup).
Storage/Controller is the network storage for VPSs on the SERVERware network. In order to use a mirrored setup, you have to install two physical machines as Storage/Controller. A Processing Host is a computation resource that attaches and executes VPSs from the storage via the SAN (Storage Area Network).
Step 2:
The installation wizard will proceed to check for available disks.
Step 3:
Select the physical disk for the system installation. The storage volume will be created automatically from the disks that are not selected in this step.
Step 4:
A confirmation dialog appears.
Step 5:
The installation wizard will now proceed with the installation of the SERVERware operating system.
Step 6:
After the OS is installed, the network configuration dialog appears.
Select Create virtual bonding network interface to proceed to network interface creation.
From the list of available network interfaces, select two interfaces for the bonding interface.
From the list of available modes, select an aggregation mode that suits your network configuration (e.g. 802.3ad).
Enter a name for the new bonding interface.
Click Next to continue, then choose one of the options to configure the new bonding interface.
After finishing the network configuration, click Next to finish the installation. The wizard will initiate a reboot.
Step 7:
Repeat the installation steps for the second (mirrored) machine.
Setup wizard steps
Open your browser and enter the bondLAN IP address you configured during installation. After confirming the self-signed certificate, the SERVERware setup wizard login screen appears. Enter the administration password, which is 'serverware' by default.
Step 1:
After a successful login, the SERVERware EULA appears.
Accepting the EULA leads to the next step.
Step 2:
Enter your license number and the administrator's email address, and set a new administrator password for the SERVERware GUI. This password will also apply to the shell root account. Select your time zone and click Next to continue.
Step 3:
Depending on the license you acquired, this step will offer to configure the LAN, SAN and mirror (RAN) networks. LAN is the local network for SERVERware management and service provision. SAN is the network dedicated to connecting the SERVERware storage with the processing hosts. RAN is the network dedicated to real-time mirroring between the two servers.
Before proceeding with the network configuration, use the command line utility netsetup to create the virtual bonding network interfaces bondRAN and bondSAN, both on this machine and on the second (mirrored) machine.
The setup wizard will suggest a default configuration for the network interfaces. Both machines must be on the same LAN, SAN and RAN networks. Modify the network configuration if needed and click Next to proceed to the next step.
Step 4:
Choose a name for the cluster if you don't like the one generated by SERVERware (it must be a valid hostname).
Select from the list, or enter, the LAN IP address of the second (mirrored) machine. The purpose of the mirrored setup is to provide storage redundancy, so it needs a few more configuration parameters. The LAN Virtual IP is a floating IP address for accessing the mirrored storage server. The SAN Virtual IP is a floating IP address used for access to the storage. The Administration UI IP address will be used for the CONTROLLER VPS (GUI).
The CONTROLLER VPS is set up automatically on the storage server; it provides the administrative web console and controls and monitors SERVERware hosts and VPSs.
Once you click the Finish button, the wizard will initialize and set up the network storage. When complete, the setup presents the summary information.
Wait a few seconds for the CONTROLLER to start, then click the Controller Web Console link to start using SERVERware and creating VPSs.
Storage Expansion and Replacement
Replacing one damaged HDD from RAID 1 on the secondary HP ProLiant DL360 G7 server while the system is running
When one of the HDDs in the mirror is damaged, the following procedure should be followed:
Setup: We have two mirrored storage servers; each server has 2 HDDs in RAID 1 (mirror) for storage.
For this procedure, first check the status of the Smart Array.
Connect through SSH to the storage server with the faulty HDD and, using the HP utility hpacucli, execute the following:
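As a sketch (the command output is not reproduced in this guide), the array status can be checked like this:
 hpacucli ctrl all show config
 # optionally, show more per-drive detail:
 hpacucli ctrl all show config detail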
As we can see from the output, one of the HDDs has a 'Failed' status.
The next step is to remove the faulty HDD from bay 6 and insert a new HDD into bay 6.
After the replacement, we can use the same hpacucli command to check the status of the newly installed HDD.
Now we can see that the HP RAID controller is rebuilding data on the replaced HDD,
and that the status of logicaldrive 3 is (931.5 GB, RAID 1, Recovering, 0% complete).
This can take a while to complete, depending on the storage size.
After completion, the HDD should have the status 'OK'.
This is the end of our replacement procedure.
Replacing one damaged HDD from RAID 0 on the secondary HP ProLiant DL360 G7 server while the system is running
When one of the HDDs in the mirror is damaged, the following procedure should be followed:
Setup: We have two mirrored storage servers; each server has 2 HDDs, each set up as a separate RAID 0 logical drive.
For this procedure, first check the status of the Smart Array.
Connect through SSH to the storage server with the faulty HDD and, using the HP utility hpacucli, execute the same status command shown above (hpacucli ctrl all show config).
This is the output of the hpacucli command when everything is OK.
If one of the drives fails, the output would be:
If an HDD fails, the zpool on the primary server will be in the DEGRADED state.
Next, physically replace the damaged HDD in server bay 5 and run hpacucli again.
From the output we can see that the status of the physicaldrive is OK, but the status of logicaldrive 3 is Failed.
We need to delete the failed logical drive from the Smart Array (in this case logicaldrive 3) and recreate it with the new drive.
We can do this using the hpacucli command.
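For example, assuming the controller is in slot 0 (check with hpacucli ctrl all show status), the failed logical drive can be deleted like this:
 hpacucli ctrl slot=0 ld 3 delete
 # hpacucli asks for confirmation before removing the logical drive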
Checking the status again.
The new physicaldrive is now unassigned. Create logicaldrive 3 in RAID 0 using the new drive in bay 5. To create the logical drive, use the hpacucli command:
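A sketch of the create command, again assuming the controller is in slot 0; the drive address 2I:1:5 is the new drive in bay 5:
 hpacucli ctrl slot=0 create type=ld drives=2I:1:5 raid=0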
Command explanation:
We have created logicaldrive 3 in RAID 0 configuration, using disk 2I:1:5 in bay 5.
Checking the status again.
To find more detailed info on logicaldrive 3, including which block device name the system has assigned to it, use the following hpacucli command:
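For example (controller slot 0 assumed):
 hpacucli ctrl slot=0 ld 3 show detail
 # the output includes a line such as: Disk Name: /dev/sdd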
Once we have the block device name (Disk Name: /dev/sdd), we need to make a partition table on the newly installed disk.
To make the partition table, use parted:
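A sketch, assuming the new logical drive appeared as /dev/sdd and that a GPT partition table is used (GPT is needed for named partitions):
 parted /dev/sdd mklabel gpt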
Create the partition with a name matching the faulty partition on the primary server. We have this name from the output above:
SW3-NETSTOR-SRV2-1 FAULTED 3 0 0 too many errors
Our command in this case will be:
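A sketch of the parted command, assuming the single partition spans the whole disk:
 parted /dev/sdd mkpart SW3-NETSTOR-SRV2-1 0% 100%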
We have now added a new partition and a label. Next, we need to edit the mirror configuration file: /etc/tgt/mirror/SW3-NETSTOR-SRV2.conf
IMPORTANT: Before we can edit the configuration file, we need to log out of the iSCSI session on the primary server.
Connect through SSH to the primary server and use the iscsiadm command to log out:
Example:
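A sketch of the logout; the target name and portal below are illustrative, so use the values reported by iscsiadm -m session on the primary server:
 iscsiadm -m node -T SW3-NETSTOR-SRV2-1 -p 192.168.1.46:3259 --logout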
Now we can proceed with editing the configuration file on the secondary server.
The source of the file looks like this:
We need to replace the iscsi-id so that it matches the ID of the replaced HDD.
To see the new ID, use this command:
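For example, assuming the new drive is still /dev/sdd:
 ls -l /dev/disk/by-id/ | grep sdd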
Now edit the configuration file.
Replace the ID with the new one and save the file.
Next, we need to update the target from the configuration file we have just edited.
To update the tgt target information from the configuration file, use the following command:
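A sketch, assuming the mirror configuration file is included by the main tgt configuration and tgtd is running:
 tgt-admin --update ALL
 # a single target name can be given instead of ALL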
This ends our procedure on the secondary server.
Next, on the primary server, add the newly created virtual disk to the ZFS pool.
First, we need to log in to the iSCSI session we exported before:
Example:
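A sketch of the login, with the same illustrative target name and portal as in the logout example above:
 iscsiadm -m node -T SW3-NETSTOR-SRV2-1 -p 192.168.1.46:3259 --login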
After this, we can check the zpool status:
From the output we can see:
SW3-NETSTOR-SRV2-1 FAULTED status of secondary HDD
Now we need to change the guid of the old HDD to the guid of the new HDD, so that the zpool can identify the new HDD.
To change the guid from old to new in the zpool, we first need to find out the new guid.
We can use the zdb command to find it:
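For example, zdb with no arguments dumps the cached configuration of the imported pools, including a guid line for each vdev:
 zdb
 # locate the children[] entry whose path contains SW3-NETSTOR-SRV2-1
 # and note the value on its guid: line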
The important line in the zdb output is the one containing the guid.
The guid needs to be updated in the zpool.
We can update the guid with the following command:
Example:
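One way to apply this with standard ZFS tools is zpool replace; in this sketch the pool name NETSTOR and the device path are assumptions, and <guid> stands for the value found with zdb above:
 zpool replace NETSTOR <guid> /dev/disk/by-partlabel/SW3-NETSTOR-SRV2-1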
Now check zpool status:
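For example:
 zpool status
 # repeat until the resilver of the replaced device completes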
You need to wait for the zpool to finish resilvering.
This ends our replacement procedure.
Expanding storage with 2 new HDDs in RAID 1 on the HP ProLiant DL360 G7 while the system is running
Insert 2 new HDDs into empty bays of the storage server.
Connect through SSH to the server and, using hpacucli, create a new logical drive from the new HDDs.
Use the following command to view the configuration (the same hpacucli ctrl all show config used earlier):
We can see from the output that the new HDDs appear under unassigned.
We need to create a logical drive in RAID 1 from the unassigned HDDs.
To create the logicaldrive, use this command:
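A sketch of the create command; the controller slot and the drive addresses are assumptions, so substitute the unassigned drives reported by hpacucli:
 hpacucli ctrl slot=0 create type=ld drives=2I:1:6,2I:1:7 raid=1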
And again check the status:
Now we have a new logical drive: logicaldrive 4 (931.5 GB, RAID 1, OK).
After creating the new logical drive, we need to make a partition table on the new drive.
We need to find out which block device name the system has assigned to logicaldrive 4.
To find out, type:
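For example (controller slot 0 assumed):
 hpacucli ctrl slot=0 ld 4 show detail | grep "Disk Name"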
Use parted to make a partition table on the new logical drive.
Then create a new label (partition name).
IMPORTANT: the label must be formatted as before: SW3-NETSTOR-SRV2-2
The “SRV2” part of the label comes from the server number, and “-2” is the number of the virtual disk:
1. SW3-NETSTOR-SRV2 - this means virtual disk on SERVER 2
2. -2 - this is the number of the virtual disk (virtual disk 2)
Now add the label to the new drive.
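A sketch of the parted commands, assuming the new logical drive appeared as /dev/sde; adjust the device name to the Disk Name reported above:
 parted /dev/sde mklabel gpt
 parted /dev/sde mkpart SW3-NETSTOR-SRV2-2 0% 100%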
We have to update the configuration file so that SERVERware knows which block device to use.
We can get this information by listing the devices by ID:
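For example, filtering for the new device (assumed to be /dev/sde here):
 ls -l /dev/disk/by-id/ | grep sde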
Now copy the disk ID scsi-3600508b1001cad552daf39b1039ea46a and edit the configuration file:
In our case:
The file should look like this:
Add one more <target> entry to the configuration file for the new target:
<target SW3-NETSTOR-SRV2-2>
and ID:
<direct-store /dev/disk/by-id/scsi-3600508b1001cad552daf39b1039ea46a>
After editing, the configuration file should look like this:
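A sketch of the resulting file; the existing target entry is kept as it is (its disk ID below is just a placeholder), and any other directives already present in the file stay unchanged:
 <target SW3-NETSTOR-SRV2-1>
  <direct-store /dev/disk/by-id/scsi-EXISTING-ID-OF-FIRST-DRIVE>
 </target>
 <target SW3-NETSTOR-SRV2-2>
  <direct-store /dev/disk/by-id/scsi-3600508b1001cad552daf39b1039ea46a>
 </target>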
Save the file and exit.
We need to edit one more configuration file to add the location of the secondary HDD:
Add the new storage name, comma-separated, after the existing storage name.
Edit the file and apply the change so it looks like this:
Save the file and exit.
Now repeat all these steps on the other server.
After all these steps are done on both servers, we need to link the storage from the secondary server to the zpool on the primary server.
Connect through SSH to the secondary server and export the target to iSCSI using the tgt-admin tool:
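A sketch; tgt-admin -e (re)creates all targets defined in the configuration files, which exports the new target:
 tgt-admin -e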
This ends our procedure on the secondary server.
Connect through SSH to the primary server and use iscsiadm discovery to find the new logical disk we exported on the secondary server.
First, find out the network address of the secondary storage server:
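One way to do this is to check the address tgtd is listening on, on the secondary server (this assumes the net-tools netstat is installed; ss -tlnp works the same way):
 netstat -tlnp | grep tgtd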
The output gives us the information we need for the discovery command: 192.168.1.46:3259 (IP address and port).
Now, using iscsiadm discovery, find the new logical drive:
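For example, using the portal found above:
 iscsiadm -m discovery -t sendtargets -p 192.168.1.46:3259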
Now log in to the exported iSCSI session and list the block devices to see the newly added logical drive. For example:
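A sketch, logging in to the discovered target (the target name is illustrative) and then listing the block devices:
 iscsiadm -m node -T SW3-NETSTOR-SRV2-2 -p 192.168.1.46:3259 --login
 lsblk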
Now we need to expand our pool with the new logical drives. Be careful with this command: check the names of the logical drives to make sure you have the right ones.
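A sketch of the expansion, assuming the pool is named NETSTOR and that the local and remote partitions carry the labels described above (with SRV1 standing for the primary server); double-check every device path before running it:
 zpool add NETSTOR mirror /dev/disk/by-partlabel/SW3-NETSTOR-SRV1-2 /dev/disk/by-partlabel/SW3-NETSTOR-SRV2-2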
Now, in the zpool, we should see the newly added logical volume:
This is the end of our storage expansion procedure.