Week 2 walkthrough
Cybertex · Server+ Capstone

What We Built

A hardened small-business datacenter — from bare metal to live hypervisor — in one week.

1 · HP ML350p Gen8

3 · Drives in RAID 5

3 TB · Total raw storage

32 GB · DDR3 ECC RAM

Instructor: Anthony Pena · [email protected]

S/N 2M251705M9 · Product 736983-S01

The Mission

What we set out to build

🏢 A mini datacenter

One physical server hosting multiple virtual machines — a Windows domain controller, a Linux web stack, a database, a jump box.

🌐 A segmented network

Three zones: management, DMZ, and private LAN — isolated by firewall rules on the Proxmox host itself.

📋 Proof it works

Hardware documentation, asset tracking, a Packet Tracer diagram, and a live walkthrough on demo day.

This deck shows what we accomplished through the end of Week 1 and what's coming next.

Chapter 1 · The Hardware

HP ProLiant ML350p Gen8 tower server

As delivered

Manufacturer: HP
Model: ProLiant ML350p Gen8
Product ID: 736983-S01
Serial #: 2M251705M9
Form factor: 5U tower (rack convertible)
BIOS: P72 · 08/02/2014
Chassis features: hot-plug drives, redundant PSU, hot-plug fans
Management: iLO 4 (dedicated RJ-45)

Why this platform

  • Enterprise-grade — hot-plug drives, ECC RAM, redundant PSUs — ideal for a 24/7 lab
  • iLO 4 lets us install and operate without touching the physical machine
  • Smart Array P420i is hardware RAID — faster and more forgiving than software RAID
  • 4× 1 GbE onboard NICs give us room to segment traffic
  • Two CPU sockets and 24 DIMM slots = headroom for future expansion
Chapter 1 · Physical Inspection

What's inside

Front panel

Front panel — Systems Insight Display, drive cage, USB, serial pull-tab, power button

System board

System board — 2 CPU sockets, 24 DIMM slots, P420i RAID, PCIe risers

DIMM layout

DIMM population chart — white slots first, matched pairs, balance across channels

Chapter 1 · Components Identified

Matched every part to a datasheet

SK Hynix RAM

RAM · 4× 8 GB

SK Hynix HMT41GR7BFR4A-PB · PC3L-12800R · DDR3L-1600 ECC RDIMM · 1.35 V

WD Blue

Drive 1 · WD Blue

WD10EZEX · 1 TB · 7200 RPM · 64 MB cache · SATA · 2018

WD RE3

Drive 2 · WD RE3

WD1002FBYS · 1 TB · 7200 RPM · 32 MB cache · SATA · 2010

HP 460W PSU

PSU · 2× 460 W 1+1

HP HSTNS-PL14 · Gold Common Slot · p/n 511777-001

Additional: 1× Xeon E5-2609 v2 (socket 1); 3rd 1 TB drive in bay 3; P420i RAID controller onboard; iLO 4 management

Chapter 1 · Discovery Summary

Final hardware spec sheet

Compute

CPU: 1× Xeon E5-2609 v2
Cores / threads: 4C / 4T (no HT)
Base clock: 2.50 GHz (no turbo)
L3 cache: 10 MB
Socket 2: empty
VT-x / VT-d: ✓ (both supported)

Memory

Installed: 32 GB = 4× 8 GB
Type: DDR3L-1600 ECC RDIMM
Effective speed: 1333 MT/s (IMC cap)
Slots used: 4 of 24 · quad-channel ✓
NUMA nodes: 1 (single CPU)

Storage

Controller: Smart Array P420i
Firmware: 8.50.66.00
Cache: FBWC capacitor (likely 512 MB)
Drives: 3× 1 TB SATA 3.5" (3 TB raw)
Bay locations: Port 1I, Box 1, Bays 1–3

Networking

Onboard NIC: HP 331i (Broadcom BCM5719)
Ports: 4× 1 GbE
PXE / SR-IOV: —
Remote mgmt: iLO 4 (dedicated RJ-45)

Power & Cooling

PSUs: 2× 460 W 1+1
Model: HP HSTNS-PL14 Gold
Input: 100–240 V · 50/60 Hz
Fans: 4 hot-plug redundant

Firmware

BIOS: P72 · 08/02/2014
Bootblock: 03/05/2013
SATA Option ROM: v2.00.CO2 (2011)
NIC Boot Agent: Broadcom NetXtreme (2014)
Boot mode: Legacy
Chapter 2 · Architecture

The network we designed

Network diagram: School LAN 10.10.0.0/16 (GW 10.10.10.1) → Proxmox host (vmbr0 · vmbr1 · vmbr2, iptables NAT/PAT routes between zones) → DMZ / Jump 172.16.0.0/24 (Web · Jump Box) and Private LAN 192.168.0.0/24 (AD · SQL · Files)

vmbr0 · Management + School LAN

10.10.0.0/16 · GW 10.10.10.1 · iLO + Proxmox UI + updates

vmbr1 · DMZ

172.16.0.0/24 · public-facing · Web + Jump box

vmbr2 · Private LAN

192.168.0.0/24 · trusted · AD + DNS + DHCP + SQL

Chapter 2 · Storage Decision

Why we chose RAID 5

Options considered

Level | Usable | Protection | Verdict
RAID 0 | 3 TB | None | Rejected
RAID 1 (mirror) | 1 TB | 1 fail | Too little space
RAID 5 | 2 TB | 1 fail | Chosen ✓
RAID 6 | 1 TB | 2 fails | Needs 4 drives
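
For reference, RAID 5 usable capacity is (number of drives − 1) × drive size, so with our hardware:

(3 − 1) × 1 TB = 2 TB usable · the remaining 1 TB is consumed by distributed parity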

Rationale

  • With 3 drives, RAID 5 is the natural fit — minimum met, best space efficiency (~67%)
  • Survives 1 drive failure without data loss — matters with our mixed-age drive pool
  • 2 TB usable comfortably hosts the 4 planned VMs plus ISOs + snapshots + backups
  • Hardware RAID on P420i is faster than ZFS on this generation — and doesn't need HBA-mode flashing

⚠ Caveat: drives are mismatched — WD Blue (2018, consumer, no TLER) + WD RE3 (2010) + a 3rd. The array runs at the slowest member's speed, and the oldest drive is statistically the most likely to fail first. Weekly SMART checks recommended.
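
A minimal weekly SMART check, assuming smartmontools is installed on the Proxmox host and the P420i exposes its physical drives through the cciss passthrough (drive numbers 0–2 are assumptions; confirm against the controller's own drive list):

# query each physical drive behind the Smart Array controller
apt install smartmontools -y
for n in 0 1 2; do
  echo "=== physical drive $n ==="
  smartctl -a -d cciss,$n /dev/sda | grep -iE 'device model|overall-health|reallocated|pending'
done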

Chapter 2 · Execution

Built the RAID 5 array in ORCA

ORCA RAID 5 selection

3 drives marked [X] · RAID 5 selected · Max Boot Partition disabled (4 GB)

What we pressed

  1. Rebooted, watched POST for the Smart Array banner
  2. Pressed F8 to enter ORCA (Option ROM Configuration for Arrays)
  3. Chose Create Logical Drive
  4. Pressed Space on each of 3 drives at Port 1I, Box 1, Bays 1–3
  5. Selected RAID 5 · defaults: 256 KiB stripe, Accelerator Enable
  6. Pressed Enter to commit, F8 to save
  7. Exited and rebooted

Result: 1 logical drive · ~2 TB usable (one drive's capacity reserved for parity) · parity initialization began in the background.

Chapter 2 · Verification

POST confirmed the array

ORCA main menu with View Logical Drive enabled

↑ ORCA main menu after creation — "View Logical Drive" is now selectable, which means an array exists ✓

POST showing Non-System disk error

↑ Next boot: "Non-System disk" — exactly what we wanted. Server sees the array as C:, just no OS yet. The 1785 "array not configured" error is gone.

Chapter 3 · Hypervisor Install

Booted Proxmox VE 8.2.2 via iLO virtual media

F11 boot override menu

F11 at POST → option 1 (CD-ROM) — iLO serves the ISO as a virtual CD

Proxmox VE boot menu

Proxmox installer loaded — picked "Install Proxmox VE (Graphical)"

Proxmox network config

Management network configured on eno1

No USB stick needed — iLO's virtual media mounted the ISO straight from a laptop browser.

Chapter 3 · Install Summary

Final installer summary — then clicked Install

Proxmox installer Summary

What we installed with

Filesystem: ext4 on /dev/sda
Target disk: RAID 5 logical drive (~2 TB)
Country / TZ: United States / America/Chicago
Management NIC: eno1
Hostname: tctmachine
IP (CIDR): 10.10.10.10/16
Gateway: 10.10.10.1
DNS: 1.1.1.1 (Cloudflare)

Install ran ~5 minutes, unmounted the virtual ISO, rebooted — Proxmox came up on the first try.

Chapter 3 · Proof of Life

Web UI reached from a browser

Proxmox VE 8.2.2 web UI showing tctmachine node with local and local-lvm storage

https://10.10.10.10:8006 from a laptop on the school LAN. Logged in as root.

What the dashboard confirmed

  • Proxmox VE 8.2.2 running
  • Node tctmachine online under Datacenter
  • localnetwork SDN exists
  • local storage mounted · 2.7% used (base system)
  • local-lvm thin pool ready · 0.0% used (awaiting VMs)
  • Authentication, TLS, web stack all functional

What this means

Every layer from iLO up through the Proxmox UI is working. The RAID 5 array (~2 TB usable) is exposed as local (host root + ISOs) and local-lvm (VM disks). Ready to create bridges and VMs next.
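
Two quick host-shell checks that confirm the same storage picture (standard Proxmox and Linux commands; the storage names are the installer defaults):

# storage pools and usage as Proxmox sees them
pvesm status

# block layout underneath: sda is the P420i logical drive, LVM volumes sit on top
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT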

Where We Are Today

Proxmox VE 8.2.2 is live

Week 1 ✓ complete

  • Every component identified and documented
  • iLO 4 configured for remote management
  • RAID 5 array built on 3× 1 TB SATA (3 TB raw, ~2 TB usable)
  • Proxmox VE 8.2.2 installed on the array
  • Management IP 10.10.10.10/16 on eno1
  • Web UI live at https://10.10.10.10:8006
  • Internet reachable (DNS 1.1.1.1)

Proof — screenshots captured

  • POST banner with RAID 5 present + 1785 error cleared
  • ORCA main menu showing the logical drive
  • Proxmox installer summary
  • Proxmox login / dashboard

Known follow-ups

  • Rename host → pve01.capstone.local (hostnamectl)
  • Run apt update && apt full-upgrade
  • Enable the no-subscription repo (commands for these three steps are sketched below)
  • Record FBWC cache size + capacitor status
  • Weekly SMART check on all 3 drives (WD RE3 is 15 yrs old)
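
A rough sketch of those first three follow-ups, assuming Proxmox VE 8 on Debian 12 "bookworm"; the repo line is the standard no-subscription entry but should be verified against the current Proxmox docs, and renaming a node that already hosts guests needs extra steps:

# 1. rename the host (also update /etc/hosts and /etc/hostname to match)
hostnamectl set-hostname pve01.capstone.local

# 2. enable the no-subscription repo, comment out the enterprise one
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list

# 3. bring the host current
apt update && apt full-upgrade -y
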
Week 2 · Services Week

What we're deploying and where

🎯 Objective

Stand up the core services every downstream week depends on — Windows DNS/DHCP/IIS + Linux NGINX/MariaDB — on a NAT-bridged network.

The 4 phases

  • Phase 0 · 🌐 verify NAT bridge + Jump Box + internet
  • Phase 1 · 🪟 Windows DNS · DHCP · IIS welcome page
  • Phase 2 · 🐧 Linux NGINX · MariaDB capstone_db
  • Phase 3 · 🌐 cross-VM ping, nslookup, DHCP lease tests

Week 2 IP plan · 3 bridges live

Bridge | Subnet | Host IP
vmbr0 mgmt | 10.10.0.0/16 | 10.10.10.10
vmbr1 DMZ | 172.16.0.0/24 | 172.16.0.10
vmbr2 LAN | 192.168.0.0/24 | 192.168.0.10

VM static IPs

Windows Server: 192.168.0.2 (vmbr2)
Linux Server: 192.168.0.3 (vmbr2)
DMZ Web / Jump: 172.16.0.10–20 (vmbr1)
DHCP scope: 192.168.0.50 – .100
Phase A /etc/network/interfaces — what we wrote

Three bridges on the Proxmox host

/etc/network/interfaces with typo fixed, all three bridges valid

✓ Typo fixed — iface vmbr2 inet static. Proxmox GUI shows vmbr0/1/2 all Active=Yes, Autostart=Yes.

Correct config

auto vmbr0
iface vmbr0 inet static
    address 10.10.10.10/16
    gateway 10.10.10.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet static
    address 172.16.0.10
    netmask 255.255.255.0
    bridge-ports none
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    bridge-ports none
    bridge-stp off
    bridge-fd 0

After saving: ifreload -a — verify with ip addr show vmbr1 and ip addr show vmbr2.
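
The NAT that Phase 0 (and the instructor's notes later) relies on is not shown above. A minimal sketch of the post-up lines, added under the vmbr0 stanza with our subnets; exact placement should be checked against the live config before trusting it:

    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o vmbr0 -j MASQUERADE
    post-up   iptables -t nat -A POSTROUTING -s 172.16.0.0/24 -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 192.168.0.0/24 -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 172.16.0.0/24 -o vmbr0 -j MASQUERADE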

Phase 0 🌐 Networking Specialist · verification

Verify bridges & internet

Ping tests (from Proxmox host)

# self-tests — bridges are up?
ping -c 2 172.16.0.10       # vmbr1 self
ping -c 2 192.168.0.10      # vmbr2 self

# once VMs exist:
ping -c 4 192.168.0.2       # Win VM (vmbr2)
ping -c 4 192.168.0.3       # Linux VM (vmbr2)
ping -c 4 172.16.0.10       # DMZ Jump (vmbr1)
From → To | Record
Host → Win VM | ___ ms
Host → Linux VM | ___ ms
Host → DMZ | ___ ms

Outbound NAT (to internet)

Windows VM cmd:

ping 8.8.8.8
expect < 30 ms

Linux VM bash:

curl https://ifconfig.me
→ returns school public IP

If fails: confirm IP forwarding + iptables MASQUERADE on vmbr0 egress, VMs have correct gateway (192.168.0.10 for LAN, 172.16.0.10 for DMZ).
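
Two host-side checks worth running before digging deeper (standard commands, nothing project-specific):

# forwarding must be enabled (expect net.ipv4.ip_forward = 1)
sysctl net.ipv4.ip_forward

# MASQUERADE rules should list both internal subnets, with non-zero packet counts
iptables -t nat -L POSTROUTING -n -v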

Phase 1 🪟 Windows · DNS + DHCP

Windows Server roles: DNS + DHCP

Install DNS + create zone

  1. Server Manager → Add Roles → DNS Server
  2. Tools → DNS → right-click Forward Lookup Zones → New Zone
  3. Primary zone · name teamx.local
  4. Right-click zone → New Host (A):
    name winserver · IP 192.168.0.2 · ✓ create PTR
nslookup winserver.teamx.local
Address: 192.168.0.2

Install DHCP + create scope

  1. Server Manager → Add Roles → DHCP Server → complete wizard
  2. Tools → DHCP → IPv4 → right-click → New Scope
Name: CapstoneScope
Range: 192.168.0.50 – .100
Mask: 255.255.255.0
Gateway: 192.168.0.10 (Proxmox host)
DNS: 192.168.0.2
Suffix: teamx.local

Right-click scope → Activate. Test from a client: ipconfig /release && ipconfig /renew.

Phase 1 🪟 Windows · IIS

IIS — publish the welcome page

Install + deploy

  1. Server Manager → Add Roles → Web Server (IIS)
  2. File Explorer → C:\inetpub\wwwroot\
  3. Delete iisstart.htm + iisstart.png
  4. Right-click → New → Text Document → paste the HTML below
  5. Save As… → file type All Files → name it index.html
<html>
  <body>
    <h1>Welcome to Week 2!</h1>
  </body>
</html>

Test from a client

# by IP
http://192.168.0.2

# by DNS hostname
http://winserver.teamx.local

Both should render Welcome to Week 2! as an H1.

📸 Screenshot the browser showing the welcome page — URL bar must be visible.

Troubleshoot: Windows Firewall → allow HTTP (port 80) if the page can't be reached remotely.
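
Before taking the screenshot, a quick command-line check from the Linux VM (assuming curl is installed there) confirms IIS is serving the page:

# by IP, then by hostname (the second also exercises the DNS zone, once the VM's DNS points at 192.168.0.2)
curl -s http://192.168.0.2 | grep -i "welcome"
curl -s http://winserver.teamx.local | grep -i "welcome"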

Phase 2 🐧 Linux · NGINX

NGINX — install & publish

Install & enable

sudo apt update
sudo apt install nginx -y

sudo systemctl enable nginx
sudo systemctl start nginx

sudo systemctl status nginx
● nginx.service - active (running)

Deploy the page

echo "<h1>Welcome to Linux Week 2</h1>" \
  | sudo tee /var/www/html/index.html

cat /var/www/html/index.html

Test from another VM

# from Windows or Jump Box browser
http://192.168.0.3

📸 Screenshot the browser showing the rendered heading.

If blocked: sudo ufw allow 80/tcp · verify the IP is .3 and the gateway is .10 (the Proxmox host).
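
If it still fails, confirm NGINX is actually listening before blaming the network (standard commands on the Linux VM):

# NGINX should be bound on port 80
sudo ss -tlnp | grep ':80'

# local smoke test, bypassing the network entirely
curl -s http://localhost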

Phase 2 🐧 Linux · MariaDB

MariaDB — create the capstone database

Install + launch

sudo apt install mariadb-server -y
sudo systemctl enable mariadb
sudo systemctl start mariadb

# optional hardening
sudo mysql_secure_installation

sudo mysql

Create DB + user + grant

CREATE DATABASE capstone_db;

CREATE USER 'capuser'@'localhost'
  IDENTIFIED BY 'securepass';

GRANT ALL PRIVILEGES ON capstone_db.*
  TO 'capuser'@'localhost';

FLUSH PRIVILEGES;
EXIT;

Verify

mysql -u capuser -p -e "SHOW DATABASES;"
| capstone_db        |

📸 Screenshot the SHOW DATABASES; output with capstone_db visible.
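
Optionally prove the grant allows writes, not just SHOW DATABASES; the table name smoke_test is just an example:

mysql -u capuser -p capstone_db -e "
  CREATE TABLE IF NOT EXISTS smoke_test (id INT, note VARCHAR(50));
  INSERT INTO smoke_test VALUES (1, 'week 2 check');
  SELECT * FROM smoke_test;"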

Phase 3 🌐 Cross-VM Tests + Report

Verify everything end-to-end · write the report

Ping both directions

# Win cmd
ping 192.168.0.3

# Linux bash
ping -c 4 192.168.0.2

DNS lookup

nslookup \
  winserver.teamx.local
→ 192.168.0.2

DHCP lease

New client VM → DHCP → record the leased IP from Win DHCP Manager.

📸 5 mandatory screenshots

  1. DNS zone + A record
  2. DHCP scope + active lease
  3. IIS page in browser
  4. NGINX page in browser
  5. DB CLI: SHOW DATABASES;

📄 Week 2 Report

  • Cover: week, team, roles
  • Phase 0 tables filled
  • Test summary table w/ Pass/Fail
  • 5 screenshots (above)
  • Reflection ×3 (trickiest test, longest service, unresolved issues)
  • Asset tracker export (VMs · IPs · roles)

Full walkthrough: week2.html

Week 1–5 Guide · Instructor's master config (2026-04-23)

Jump Box, DNAT port-forward, reverse routes

🛡 Jump Box VM — on vmbr1

  • Ubuntu Server · 2 vCPU · 2 GB · 25 GB
  • Static 172.16.0.x, gw 172.16.0.1
  • Hardened SSH: non-root user, PermitRootLogin no
  • ufw allow 22/tcp from 10.10.10.0/24, 172.16.0.0/24, 192.168.0.0/24
  • DNS for all internal VMs = Windows Server IP

🔁 Reverse route — on internal VMs

# Windows (admin PowerShell)
route add 172.16.0.0 mask 255.255.255.0 192.168.0.1

# Linux
sudo ip route add 172.16.0.0/24 via 192.168.0.1

🔀 Host iptables — DNAT + persistence

# SSH port-forward: WAN :2222 → Jump Box :22
iptables -t nat -A PREROUTING -i vmbr0 \
  -p tcp --dport 2222 \
  -j DNAT --to-destination 172.16.0.2:22

iptables -A FORWARD -p tcp -d 172.16.0.0/24 \
  --dport 22 -j ACCEPT
iptables -A INPUT  -p tcp --dport 2222 -j ACCEPT

# Persist across reboots
apt install iptables-persistent
netfilter-persistent save
# or:
iptables-save > /etc/iptables/rules.v4

MASQUERADE for 172.16.0.0/24 and 192.168.0.0/24 out vmbr0 already set in /etc/network/interfaces as post-up lines.

Test: from a laptop on the school LAN → ssh -p 2222 [email protected] → lands on the Jump Box at 172.16.0.2.
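
Once the port-forward works, the Jump Box can act as an SSH hop into the internal VMs. A sketch using OpenSSH's ProxyJump (the linuxadmin account on the Linux VM is a hypothetical name; jumpadmin and the addresses come from our build):

# laptop → Jump Box (via the :2222 DNAT) → Linux VM on the private LAN
ssh -J [email protected]:2222 [email protected]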

Phase B+ Building VM 101 · jumpbox

Jump Box install — values that worked

Proxmox Create VM

VM ID: 101
Name: jumpbox
ISO: ubuntu-24.04.1-live-server-amd64.iso
Disk: 25 GB on local-lvm
CPU: 1 socket × 2 cores, type host
RAM: 2048 MB
Bridge: vmbr1 (DMZ), VirtIO
ISO list before Ubuntu Server download

ISO Images storage view (Ubuntu Desktop was already there; Server downloaded next)

Ubuntu installer · Network

Method: Manual
Subnet: 172.16.0.0/24
Address: 172.16.0.2
Gateway: 172.16.0.10 (Proxmox host)
DNS: 1.1.1.1, 8.8.8.8

Profile

Server name: jumpbox
Username: jumpadmin
SSH: ✓ Install OpenSSH server

⚠ Ubuntu installer's autoconfig fails on vmbr1 (no DHCP) — that's expected. Set static manually.
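
First-boot checks on the new jumpbox (standard commands; only the addresses are specific to our build):

ip -br addr                  # expect 172.16.0.2/24 on the VirtIO NIC
ping -c 2 172.16.0.10        # Proxmox host on vmbr1
ping -c 2 1.1.1.1            # outbound through the host NAT
systemctl status ssh         # OpenSSH server selected during install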

Beyond Week 2

Weeks 3 & 4

W3

Services depth

  • Promote Windows → AD DC (capstone.local)
  • Move DNS + DHCP under AD integration
  • Install SQL Server Express (Windows) / confirm MariaDB connected
  • Create file shares with NTFS permissions
  • Ship a baseline GPO (password policy, screen lock)
  • Configure Windows Backup
W4

Harden & demo

  • Tighten firewall rules: default-deny + explicit allow
  • Run vulnerability scan, fix findings
  • Test a real backup restore — not just "it ran"
  • Break something, restore it, document the recovery
  • Rehearse the demo walkthrough twice
  • Sealed runbook with every password + recovery plan
Planned VM Layout

4 VMs sized for 32 GB / 4 cores

VM | Purpose | OS | vCPU | RAM | Disk | Bridge | IP
VM1 | AD / DNS / DHCP / IIS | Windows Server 2019 | 2 | 8 GB | 80 GB | vmbr2 | 192.168.0.10
VM2 | SQL Server Express | Windows Server 2019 | 2 | 6 GB | 80 GB | vmbr2 | 192.168.0.20
VM3 | Apache + MySQL web | Ubuntu 22.04 LTS | 2 | 4 GB | 40 GB | vmbr1 | 172.16.0.10
VM4 | Jump box / gateway | Windows / Ubuntu | 1 | 2 GB | 30 GB | vmbr1 | 172.16.0.20
Totals | | | 7 | 20 GB | 230 GB | | leaves ~12 GB RAM for the Proxmox host

⚠ No Hyper-Threading on the E5-2609 v2 — 7 vCPU on 4 real cores = ~1.75× over-subscription, acceptable for lightly loaded services. Keep CPU-heavy workloads off this host.

Deliverables

What the instructor gets

Documentation

  • Week 1 report form — all fields filled with real values
  • Server Hardware Discovery Sheet — 12 sections, completed
  • IT Asset Tracking spreadsheet — Vendors · Hardware · Software tabs
  • Network diagram (Cisco Packet Tracer)
  • This slide deck — presentation for demo day

Live systems

  • Proxmox VE host reachable at https://10.10.10.10:8006
  • iLO 4 remote management on dedicated RJ-45
  • RAID 5 healthy with active parity protection
  • Coming: running AD / DNS / SQL / web VMs
  • Coming: firewall-enforced zone separation
Lessons Learned

Observations worth calling out

Hardware

  • HP POST uses different keys than Dell — F8/F9/F10/F11, not F2/F12
  • E5-2609 v2 has no Hyper-Threading and no Turbo — 4 cores is the ceiling
  • 1 CPU means half the DIMM slots and PCIe risers are inactive
  • 4× DIMMs across 4 channels = optimal quad-channel balance for 1 CPU ✓

Process

  • iLO 4 virtual media removed the need for USB installers — huge time saver
  • 1785-Drive Array Not Configured is expected before RAID is built — not an error
  • RAID 5 needs ≥3 drives — the controller greys it out below minimum
  • Proxmox accepts single-label hostnames but FQDN pve01.capstone.local is the convention
  • Drive mismatch (consumer WD Blue + 2010 WD RE3) is a reliability risk — plan to replace before production
Quick Reference

Cheat sheet — the things we'll forget

Boot keys (HP ML350p Gen8)

F9: System Utilities / RBSU (BIOS)
F10: Intelligent Provisioning (HP installer)
F11: One-time boot menu
F8: Smart Array ORCA (RAID config)

Key URLs

iLO 4: https://<iLO-IP>
Proxmox UI: https://10.10.10.10:8006

IP plan

Zone | Subnet | Gateway
Mgmt (vmbr0) | 10.10.0.0/16 | 10.10.10.1
DMZ (vmbr1) | 172.16.0.0/24 | host (172.16.0.10)
LAN (vmbr2) | 192.168.0.0/24 | host (192.168.0.10)

POST error codes

  • 1785 drive array not configured → build the array
  • 1779 capacitor charging → wait ~5 min
  • 1794 battery < 75% capacity → replace soon
  • 1797 battery failure → replace now
Thank You

Questions?

Live server: https://10.10.10.10:8006

HP ProLiant ML350p Gen8 · 1× Xeon E5-2609 v2 · 32 GB DDR3 ECC · RAID 5 on 3× 1 TB SATA (~2 TB usable)
Proxmox VE 8.2.2 · 3-bridge iptables topology · Cybertex Austin

✓ Week 1 done · Week 2: Virtualize · Week 3: Services · Week 4: Demo
