4 weeks. 1 blade server. Full enterprise stack. Here's everything you need to understand, plan, and ship the project — explained with diagrams, step-by-step walkthroughs, and checklists you can tick off as you go.
New to servers? This section uses analogies you already know. By the end you'll understand what you're actually building before touching any real config.
Demo-ready decks built from this project's actual screenshots and configs. Open any of them — they're fully keyboard-navigable presentations (←/→ to navigate, Esc for overview, F for fullscreen).
16 slides — bare metal to live Proxmox VE 8.2.2. Hardware specs (HP ML350p Gen8, 32 GB ECC, 3 TB RAID 5), step-by-step build, ISO upload.
Bridges, Jump Box, internal VMs. Windows DNS / DHCP / IIS · Linux NGINX / MariaDB.
The full capstone story end-to-end. Hardware → hypervisor → services → security → demo day.
That one big metal box in the rack isn't really "a computer running one thing." Modern servers run many separate "tenants" at once. Here's the mental model:
One physical box with lots of power (CPU, RAM, disk). Just like a building has plumbing, power, and walls shared by all tenants.
Software that decides which tenant gets how much CPU / RAM / disk. Hands out "apartments," keeps tenants isolated, collects rent in the form of resources.
Feels like its own complete computer โ has its own OS, IP address, users, files. But it's actually just a slice of the big building.
Bridges (vmbr1, vmbr2, ...) — A hallway lets tenants on the same floor talk to each other. A bridge lets VMs on the same "floor" exchange network traffic.
Checks every person entering or leaving. "You can go to the lobby, but not the residential floors unless you're on the list."
DMZ (vmbr1) — Anyone from outside can walk in. That's where your public website lives. Designed to be safely exposed.
Private LAN (vmbr2) — No guests allowed. Your databases, AD, file shares live here — the stuff you absolutely don't want a random internet stranger reaching.
DNS says "the apartment called mailserver.local is at unit 192.168.0.10." AD keeps the list of who's allowed in each apartment.
If an apartment burns down (server crash, ransomware), you can rebuild it exactly from the copies.
Watches all traffic. Alerts if someone jiggles the locks or carries something suspicious. IPS goes further โ it physically blocks them.
Here's the apartment-building analogy as an actual picture. Every room has a real counterpart in what you're building.
Imagine a thief breaks the lobby window. If the lobby connects directly to the vault, game over. But if the lobby is walled off and every door between lobby and vault has a guard checking IDs — now the thief is stuck in the lobby.
That's what segmentation is: making it so one compromise doesn't become total compromise.
Click the button. You'll see exactly what happens when someone from the outside internet visits your web site.
When you type mail.capstone.local, your computer has no idea where that is. It asks DNS. Watch how the lookup unfolds:
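If it helps to see that as code, here's a toy resolver: just a bash lookup table, not real DNS. The names and IPs follow this guide's plan; real resolution walks from root servers to TLD servers to your authoritative server.

```shell
# Toy model of what DNS does: name in, IP out (bash associative array).
declare -A zone=(
  [mail.capstone.local]=192.168.0.10   # the Windows AD/DNS box in this plan
  [www.capstone.local]=172.16.0.10     # the DMZ web server
)
resolve() { echo "${zone[$1]:-NXDOMAIN}"; }   # NXDOMAIN = "no such name"

resolve mail.capstone.local    # prints 192.168.0.10
resolve nosuch.capstone.local  # prints NXDOMAIN
```

Your real DNS server is exactly this idea plus caching, forwarding, and zone transfers.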
Jump to the Glossary tab — every acronym and jargon term explained in plain English. Or check the Learn tab for YouTube videos.
In 4 weeks, your team builds a standard Server+ enterprise environment on a physical blade server. You will install, configure, secure, monitor, back up, and present a complete server environment.
Level 1 — Server+ Core. Install + services. Enough to pass the Server+ outcomes.
Level 2 — Advanced. Adds AD, Docker, Wazuh, SIEM, Suricata IPS, full audit trail.
Start at Level 1. Add Level 2 bonuses once core works.
Each bar is a team role. Diamonds are milestones / weekly deliverables.
This is every real piece of software that lives somewhere in your build. Hover for a one-line reminder of what it does.
Pick one. Every team needs all four roles filled. Click a card for what you'll own.
Windows Server, AD, DNS/DHCP, IIS, file shares, backups
Linux installs, NGINX, MongoDB, services, scripts, monitoring agents
Proxmox networking, DMZ/Private LANs, firewall, IDS
Docs, diagrams, reports, presentation, audit evidence
Everything runs inside your blade server. Four Proxmox bridges (vmbr0–vmbr3) split traffic into safe zones.
Proxmox web UI (:8006), SSH to host. No VMs serve traffic here.
Why: if an attacker pops a web server, they should NOT be one hop from the hypervisor.
Anything reachable from outside lives here: NGINX, IIS, jump box, monitoring dashboards. Range 172.16.0.0/24.
Crown jewels: AD, DNS, DHCP, SQL, MongoDB, file shares. Never talks directly to the internet. Range 192.168.0.0/24.
Bridges Proxmox to the Cisco switches for real-world VLANs and uplinks.
Three address ranges, three very different trust levels. Memorize these — you'll type them a hundred times.
The school's existing network. This is where laptops, the school switch, and your Proxmox management IP live. Treat it as "the outside world" from your VMs' point of view.
Publicly-exposed services — the hotel lobby. If one of these gets compromised, the firewall still stands between the attacker and the vault.
Crown jewels. Never directly reachable from outside. All internal VMs point to the Windows Server's IP as their DNS.
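A quick way to internalize the plan: two machines can talk directly only if they share a subnet; everything else goes through the firewall. For the two /24 networks here, "same subnet" just means the first three octets match. A sketch you can run anywhere (note: this helper is /24-only, so it does not apply to the /16 school LAN):

```shell
# Do two IPv4 addresses sit in the same /24? (only valid for /24 masks)
same_subnet24() {
  [[ "${1%.*}" == "${2%.*}" ]]   # strip the last octet, compare the rest
}

same_subnet24 192.168.0.10 192.168.0.99 && echo "same LAN"
same_subnet24 192.168.0.10 172.16.0.10  || echo "different zones - firewall decides"
```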
Read it as: "from the ROW to the COLUMN, is traffic allowed?". This is the stance you want your OPNsense / iptables rules to enforce.
| | → School LAN | → DMZ | → Private LAN | → Internet |
|---|---|---|---|---|
| From School LAN | — | LIMITED — only published ports (80/443) | DENY — never direct | ALLOW — normal browsing |
| From DMZ | DENY — no calling back out | — | LIMITED — only the app's DB port | LIMITED — updates only |
| From Private LAN | DENY | ALLOW — for monitoring | — | DENY — proxy only |
| From Internet | ALLOW — it IS the internet | LIMITED — published services | DENY — absolutely not | — |
Rules are evaluated top-to-bottom. The first match wins โ remaining rules are skipped. This is why rule order matters: a broad "allow all" at the top will make every rule below it useless.
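First-match-wins is easy to demo without touching a firewall. This bash sketch (a hypothetical rule format, not OPNsense or iptables syntax) walks a rule list top-to-bottom and stops at the first hit:

```shell
# Toy first-match evaluator. Each rule is "src dst verdict"; the first rule
# whose src and dst match wins -- everything below it is never consulted.
rules=(
  "dmz private DENY"
  "any any ALLOW"      # broad allow BELOW the specific deny is fine
)

check() {  # usage: check <src> <dst>
  local src=$1 dst=$2 rule
  for rule in "${rules[@]}"; do
    set -- $rule       # split into $1=src $2=dst $3=verdict
    if [[ ($1 == "$src" || $1 == any) && ($2 == "$dst" || $2 == any) ]]; then
      echo "$3"; return
    fi
  done
  echo "DENY"          # default deny if nothing matched
}

check dmz private   # DENY  (the specific rule fires first)
check lan dmz       # ALLOW (falls through to the broad rule)
```

Swap the two rules and the broad ALLOW at the top would shadow the DENY: exactly the mistake described above.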
Server+ focus: Hardware → Virtualization → OS Install → Basic Networking
Goal by Friday: Proxmox runs, three bridges exist, OPNsense filters traffic, one VM of each OS boots with a static IP.
Pop the chassis. Document every component: CPUs, RAM sticks, drives, NICs, PSU, serial numbers. Reseat RAM, verify drives pass SMART, replace bad thermal paste if needed.
Download Proxmox VE ISO, flash to USB (balenaEtcher or Rufus). Boot the server from USB. Accept licenses. Pick a static mgmt IP on the school LAN (e.g. 10.10.10.50). Set a strong root password.
# After install, from any browser on the school LAN:
https://10.10.10.50:8006
# Log in as root with the password you set
Datacenter → Node → System → Network → Create → Linux Bridge.
Click Apply Configuration when done. Bridges only take effect after apply.
Download OPNsense ISO, upload to Proxmox local → ISO Images. Create a VM:
Windows Server 2022 on vmbr2 (Private LAN). Debian 12 or Ubuntu 22.04 on vmbr1 (DMZ). Give each a static IP.
# Example static IPs
Windows (vmbr2): 192.168.0.10 / 24 gw 192.168.0.1 (OPNsense LAN)
Linux (vmbr1): 172.16.0.10 / 24 gw 172.16.0.1 (OPNsense DMZ)
Open draw.io or Lucidchart. Re-create the diagram on the Network tab of this guide. Create a shared Google Drive / OneDrive folder:
Capstone/
├── 01-Hardware/   # inventory, photos
├── 02-Proxmox/    # install screenshots
├── 03-Network/    # topology, IP plan
├── 04-Windows/    # AD, IIS configs
├── 05-Linux/      # NGINX, scripts
├── 06-Security/   # hardening evidence
├── 07-Backups/    # restore proof
└── 08-Reports/    # weekly write-ups
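One shell line creates that whole tree (bash brace expansion; run it wherever your synced folder lives):

```shell
# Build the docs tree in one shot -- names match the structure above
mkdir -p Capstone/{01-Hardware,02-Proxmox,03-Network,04-Windows,05-Linux,06-Security,07-Backups,08-Reports}
ls Capstone   # should list all eight folders
```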
Level 2 Bonus — Promote Windows to a domain controller (the old dcpromo command is now Add Roles → AD DS → Promote). Install MongoDB on Linux. Create first VLAN tags.
Server+ focus: Server Roles → Web / File / DB Services → Basic Monitoring
Server Manager → Add roles and features → DNS Server and DHCP Server.
After install:
Create a forward lookup zone (capstone.local). Create a DHCP scope 192.168.0.100 – 192.168.0.200, router 192.168.0.1, DNS 192.168.0.10.

Add the Web Server (IIS) role. Drop an index.html into C:\inetpub\wwwroot. Test from the Linux VM:
curl http://192.168.0.10
Create C:\Shares\Team. Right-click → Properties → Sharing → Advanced Sharing → share as Team$ (hidden share) → permissions: Domain Users read, Admins full. Set NTFS ACLs to match.
Install SQL Server Express + SSMS. Create a test DB CapstoneDB.
sudo apt update && sudo apt install -y nginx
sudo systemctl enable --now nginx
# Replace default page
echo "<h1>Capstone DMZ - $(hostname)</h1>" | sudo tee /var/www/html/index.html
Visit http://172.16.0.10 from your laptop (you may need a firewall rule in OPNsense first).
sudo apt install -y mongodb-org
sudo systemctl enable --now mongod
# Simple cron job: log disk every 10 min
(crontab -l; echo "*/10 * * * * df -h >> /var/log/disk.log") | crontab -
Confirm logs rotate via /etc/logrotate.d/.
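If the disk log isn't rotating, you can drop in your own rule. A minimal sketch — the filename disklog is arbitrary, the path matches the cron job above, and the counts are just sensible defaults:

```
# /etc/logrotate.d/disklog  (hypothetical drop-in)
/var/log/disk.log {
    weekly        # rotate once a week
    rotate 4      # keep 4 old copies
    compress      # gzip rotated logs
    missingok     # no error if the log is absent
    notifempty    # skip rotation when the log is empty
}
```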
In OPNsense, open Firewall → Rules → LAN and DMZ.
Test: from Linux, curl 192.168.0.10 should fail. From Windows, curl 172.16.0.10 should work.
Write a 2-page doc with screenshots of: DNS console, DHCP scope, IIS default page, NGINX page, segmentation test results. Save to 08-Reports/Week2.md.
Level 2 Bonus — Dockerize the NGINX site, deploy a Wazuh agent, export Prometheus metrics, enable NetFlow on OPNsense, stand up WireGuard VPN.
Server+ focus: Hardening → Permissions → Backup Scripts → Logs
Review share + NTFS on Team$. Apply least privilege. In Windows Defender Firewall, block all inbound except SMB (445) from LAN only, and RDP (3389) from the jump box IP only.
Install the feature, schedule a nightly backup of C:\Shares and System State to a second disk. Test a one-file restore to a temp folder โ screenshot the result.
#!/bin/bash
# Nightly backup: copy configs + web root, archive MongoDB, prune old runs
DATE=$(date +%F)
DEST="/backups/$DATE"
mkdir -p "$DEST"
# -aAX preserves permissions, ACLs and xattrs; --delete mirrors deletions
rsync -aAX --delete /etc /var/www "$DEST/"
# Cold-file copy of the Mongo data dir (stop mongod first for a consistent copy)
tar czf "$DEST/mongo-$DATE.tgz" /var/lib/mongodb
# Drop backup folders older than 14 days (never /backups itself)
find /backups -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +
Save as /usr/local/bin/capstone-backup.sh, chmod +x, add to root's crontab at 0 2 * * *.
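Before trusting the script with real data, prove the tar half round-trips. This self-contained drill uses throwaway temp dirs (safe to run anywhere) and doubles as restore evidence:

```shell
#!/bin/bash
set -e
# Make a fake "source" with one known file
SRC=$(mktemp -d)
DEST=$(mktemp -d)
echo "important data" > "$SRC/file.txt"

# Back up -- same tar pattern as the mongo line in the script above
tar czf "$DEST/backup.tgz" -C "$SRC" .

# Restore into a fresh dir and compare byte-for-byte
RESTORE=$(mktemp -d)
tar xzf "$DEST/backup.tgz" -C "$RESTORE"
diff "$SRC/file.txt" "$RESTORE/file.txt" && echo "restore OK"
```

A backup you've never restored from is a hope, not a backup.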
Edit /etc/ssh/sshd_config (use nano):
Port 2222
PermitRootLogin no
PasswordAuthentication no
AllowUsers capstoneadmin
MaxAuthTries 3
Restart: sudo systemctl restart ssh. Test from another VM before closing your current session.
Enable Suricata/Snort in OPNsense (Services → Intrusion Detection). Subscribe to the ET Open ruleset. Point it at the WAN and DMZ interfaces. Watch the alerts tab while your Linux specialist runs curl loops.
Re-run the segmentation matrix — document every src/dst/port and expected result.
Produce a short report listing every hardening step: firewall rules, account lockouts, password policy, SSH keys, backup schedule, IDS alerts observed. Rank residual risks High/Med/Low.
Level 2 Bonus — WSUS patch server, advanced GPO baselines (CIS), Bacula CE scheduled backups, SIEM rules in Wazuh, Suricata in IPS mode.
Server+ focus: Backup → Restore → DR → Documentation → Demo
Pick one VM. Snapshot it. Delete a file or a DB row. Restore it from backup. Record time-to-restore (RTO) and what data was lost (RPO). This is your DR evidence.
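To get a defensible RTO number, wrap the restore in timestamps. A sketch — the sleep is a placeholder for your actual restore step (qmrestore, a file copy, whatever your drill uses):

```shell
#!/bin/bash
# Record time-to-restore (RTO) evidence while you run the drill
START=$(date +%s)
sleep 1                     # <-- replace with the real restore command
END=$(date +%s)
echo "RTO: $((END - START)) seconds"
```

Log the printed number in your DR evidence alongside what data was lost (RPO).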
Export the OPNsense rule set to PDF. Walk through each rule — if you can't explain why it exists, delete it. Re-run the segmentation matrix one last time.
Stitch all weekly reports into one PDF. Structure:
15-minute walkthrough. Suggested flow:
Level 2 Bonus — Full DR simulation (power off a VM, rebuild from backup on a fresh VM), MISP → OpenCTI threat intel flow, audit log exports.
This is the actual click-by-click walkthrough from your instructor's config guide. Follow it in order. Screenshot every major step for your docs folder.
These are the service labels printed inside your ML350p Gen8. Keep this tab open while you're working with the hardware โ the diagrams tell you exactly which slot goes where.
31 numbered components: PCIe slots (1–12), DIMM slots, both processor sockets, Mini SAS connectors (A, B), cache module slot, SATA connectors, SD card slot, TPM connector, iLO connector, and the system maintenance switch (30) — the one you flip to disable iLO security or clear passwords if you're ever locked out.
Critical when adding or reseating RAM. Rules at a glance:
Single CPU: populate channels in letter order — A, B, C, D, E, F. Dual CPU: alternate processors — P1:A, P2:A, P1:B, P2:B, …
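The dual-CPU order is mechanical: interleave the two processors over the single-CPU letter sequence. A throwaway loop that prints it (illustration only; slot letters per the HPE guide):

```shell
# Expand the single-CPU channel order A..F into the dual-CPU order
for slot in A B C D E F; do
  printf 'P1:%s P2:%s ' "$slot" "$slot"
done
echo   # prints P1:A P2:A P1:B P2:B ... P1:F P2:F
```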
Systems Insight Display, drive cages, USB, serial number pull-tab, power button. LED color legend:
Here's a stylized front + rear view of the ML350p Gen8 so you know what each port does before you touch a cable.
Open \\itsdc3\its in the address bar. Use Rufus-4.7 to flash the USB. From \\itsdc3\its, pick the Proxmox 6.4-1 ISO. HP quirks: F9 for BIOS (not F2), F8 for the RAID Option ROM (not Ctrl+R), plus an extra F10 "Intelligent Provisioning" menu that can do most of the install for you.

Watch the bottom of the screen right after power-on — HP shows which keys do what for a few seconds. Tap the key you want as soon as you see POST:
| Key | What it opens | Use it for |
|---|---|---|
F9 | System Utilities (RBSU) | BIOS settings, boot order, date/time |
F10 | Intelligent Provisioning | Guided RAID + OS install in one place |
F11 | One-time Boot Menu | Pick USB for this boot only |
F8 | Smart Array Option ROM | RAID configuration (see Part 3) |
Plug in keyboard and monitor. Power on. When you see the HP splash screen, tap F9 repeatedly until you see "System Utilities" load. (On older firmware you'll see "ROM-Based Setup Utility" — same thing.)
Adjust the order with the + / - keys. Press F10 to save, then Exit. Alternatively, tap F11 at POST and pick the USB from the one-time boot menu to keep your normal boot order intact.

The ML350p Gen8 has iLO 4 (Integrated Lights-Out) — a tiny computer inside the server that lets you power it on/off and see the console remotely, even when the OS is dead. Worth setting up.
Press F9 → System Configuration → iLO 4 Configuration Utility. Give iLO a static IP on the school LAN (e.g. 10.10.10.51), then browse to https://10.10.10.51.

On the ML350p Gen8 you have two ways to configure RAID:
Pro: fastest, 100% capacity usable
Con: ONE disk fails → ALL data gone
Use for: scratch space only. Not for this lab.
Pro: full redundancy, simple
Con: only 50% capacity usable
Use for: OS / boot drive on a 2-disk setup. Great default.
Pro: survives one disk failure, ~67% usable
Con: slow writes, rebuilds are risky
Use for: 3+ disks, balanced cost / safety. Good for data drive.
Pro: fast + redundant, survives multi-disk failures
Con: only 50% capacity, needs 4+ disks
Use for: databases. Best if you have 4+ disks.
Reboot. During POST watch for: "Slot 0 HP Smart Array P420i Controller — Press <F8> to run Option ROM Configuration for Arrays Utility." Tap F8 as soon as you see it.
From the main menu pick Delete Logical Drive. Highlight an existing drive → press F8 to confirm delete → Enter. Repeat until the list is empty.
Select physical drives with Space. Press Enter to create → F8 to save. Press Esc → confirm exit. No separate "initialize" step is needed on the HP controller — the drive is ready once created. If asked, let a quick format run.
Reboot — at POST, tap F10. First launch asks for basic setup (language, date, network, admin contact). Fill it in and continue.
From the main menu: Perform Maintenance → HP Smart Storage Administrator (SSA). Pick your Smart Array P420i controller on the left.
Exit SSA, then from IP's main menu pick Configure and Install — it can drive the Proxmox install next if you want a one-shot flow.
Tap F11 for the boot menu and pick the USB. In the Proxmox installer, set:

Hostname: [your team hostname]
Static IP: 10.10.10.X/24   # ask instructor for your assigned X
Gateway: 10.10.10.1        # school router
Remove USB, reboot. From a school desktop browser:
https://10.10.10.X:8006
Log in with your Proxmox credentials.
Browse to https://10.10.10.X:8006 and upload the ISO (from \\itsdc3\its, pick the ISO). Then, on the Proxmox host CLI (use nano):
nano /etc/network/interfaces
Add:
auto vmbr1
iface vmbr1 inet static
address 172.16.0.X
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
This establishes the 172.16.0.0/24 subnet for your Jump Box zone.
In the same file, add:
auto vmbr2
iface vmbr2 inet static
address 192.168.0.1
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
This establishes the 192.168.0.0/24 subnet for internal-only VM traffic.
systemctl restart networking
# or full reboot
reboot
Give the Jump Box a static IP 172.16.0.X, gateway 172.16.0.1. Then install SSH:

sudo apt update
sudo apt install openssh-server
sudo systemctl enable ssh
sudo systemctl start ssh
Create a non-root admin:
sudo adduser <username>
sudo passwd <username>
Edit the SSH config (use nano):
sudo nano /etc/ssh/sshd_config
Find and change these lines (remove the leading # if present):
PermitRootLogin no
PasswordAuthentication yes
Save: Ctrl+O then Enter. Exit: Ctrl+X.
Restrict SSH to trusted subnets (example values โ adjust to your setup):
sudo ufw allow from 10.10.10.0/24 to any port 22
sudo ufw allow from 192.168.0.0/24 to any port 22
sudo ufw allow from 172.16.0.0/24 to any port 22
sudo ufw enable
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
iptables -t nat -A POSTROUTING -s 172.16.0.0/24 -d 192.168.0.0/24 -j MASQUERADE
iptables -A FORWARD -s 172.16.0.0/24 -d 192.168.0.0/24 -j ACCEPT
# Port-forward 2222 on the host to the internal SSH jump target
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 2222 \
-j DNAT --to-destination 192.168.0.2:22
iptables -A FORWARD -p tcp -d 192.168.0.0/24 --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
iptables -t nat -A POSTROUTING -o vmbr1 -j MASQUERADE
iptables -t nat -A POSTROUTING -o vmbr0 -j MASQUERADE
This lets traffic from the School LAN and Jump Box subnet reach the internal network.
apt install iptables-persistent
netfilter-persistent save
# or:
iptables-save > /etc/iptables/rules.v4
# From Proxmox host → Jump Box
ssh <user>@172.16.0.X
# From the Jump Box → internal VM
ssh <user>@192.168.0.X
Give each internal VM a static IP 192.168.0.X, gateway 192.168.0.1. Point every internal VM's DNS at the Windows Server (e.g. 192.168.0.10) — that box will become your AD / DNS server.

Add a return route so internal VMs can reach the Jump Box at 172.16.0.X:
Windows (PowerShell as Administrator):
route add 172.16.0.0 mask 255.255.255.0 192.168.0.1
Linux:
sudo ip route add 172.16.0.0/24 via 192.168.0.1
Distilled from the official HPE ProLiant ML350p Gen8 User Guide (Part 661082-008R, Edition 9). Every table here is the answer to a question you'll have during the lab — what does this LED mean, which key do I press, how do I populate RAM, etc.
Full manual (139 pages): ML350p-Gen8-User-Guide.pdf — open in a new tab, searchable.
Your unit's serial #: 2M251705M9 — printed on the pull-tab at the front of the chassis; record this on every deliverable (Week 1 report, asset sheet, iLO config).
Watch the bottom of the screen right after power-on. These keys are only active for a few seconds during POST.
| Key | Opens | Use when you want toโฆ |
|---|---|---|
F9 | RBSU / System Utilities | Change BIOS settings, boot order, configure iLO, re-enter serial # |
F10 | Intelligent Provisioning | Guided RAID + OS install, run SSA, view Active Health log |
F11 | One-time boot menu | Boot once from USB without changing boot order |
F12 | PXE network boot | Net-install without any media |
F8 | Smart Array Option ROM (ORCA) | Create/delete logical drives on the P420i from text menu |
Four LEDs on the front bezel tell you the server's health without you having to plug in a monitor. Memorize these — when something's wrong, this is the first thing a technician reads.
| LED | State | What it means |
|---|---|---|
| Power | Solid green | System is on and running |
| Flashing green (1 Hz) | Performing power-on sequence | |
| Solid amber | Standby (plugged in, not on) | |
| Off | No power — check cord, PSU, power button cable | |
| NIC | Solid green | Linked to network |
| Flashing green | Network activity | |
| Off | No network activity | |
| Health | Solid green | Normal |
| Flashing amber | System degraded — check Systems Insight Display | |
| Flashing red (1 Hz) | System critical | |
| Fast-flashing red (4 Hz) | Power fault | |
| UID | Solid blue | Activated — lets you find this server in a rack |
| Flashing blue (1 Hz) | Remote mgmt via iLO OR firmware upgrade in progress | |
| Off | Deactivated |
| Port | Purpose |
|---|---|
| NIC 1–4 | Four 1 Gbps data NICs (HP 331i or 361i embedded). These carry your vmbr0/1/2 traffic. |
| iLO | Separate dedicated NIC for remote management. Not one of the four above — it's labeled with the iLO icon. |
| Video (VGA) | Plug in a monitor directly. Also available virtually via iLO Remote Console. |
| Serial | DB-9 serial console port — old-school admin, still useful when nothing else works. |
| USB × 4 | Keyboards, boot media, licensing dongles. Front has 4 more. |
| PSU 1–4 | Up to four hot-swap redundant power supplies. 460W / 750W / 1200W options. |
| PCIe slots 1–9 | Slots 1–4 belong to Processor 1, slots 5–9 to Processor 2 (slots 5–9 only work with the 2nd CPU installed). |
| Kensington lock | Physical theft deterrent. |
A bank of DIP switches on the system board. Defaults are all Off. You flip these only to recover from being locked out or to force firmware recovery.
| Switch | Off (default) | On |
|---|---|---|
S1 | iLO 4 security enabled | iLO 4 security disabled (recovery) |
S2 | System config can be changed | System config locked |
S5 | Power-on password enabled | Power-on password disabled |
S6 | No function | ROM reads config as invalid = clears CMOS + NVRAM |
S3, S4, S7–S12 | Reserved — do not change | |
Your ML350p has 24 DIMM slots total — 12 per processor. Populating them wrong = the server won't POST or runs in degraded mode.
Per CPU, populate channels in letter order: A, B, C, D, E, F, G, H, … With both CPUs installed, alternate processors: P1:A, P2:A, P1:B, P2:B, P1:C, P2:C, …

AMP modes (Advanced Memory Protection) — set in RBSU:
Each drive caddy has 4 LEDs. When you're troubleshooting a RAID issue or swapping a failed disk, this is your decoder.
| LED | State | Meaning |
|---|---|---|
| Locate (1) | Solid blue | Drive being identified by a host app (e.g. SSA "locate") |
| Flashing blue | Drive firmware update in progress | |
| Activity (2) | Rotating green | Drive is active (I/O happening) |
| Off | No drive activity | |
| Do not remove (3) | Solid white | Do not remove — pulling this drive fails a logical drive |
| Off | Safe to remove | |
| Drive status (4) | Solid green | Member of one or more logical drives |
| Flashing green | Rebuilding / migrating / expanding / erasing | |
| Flashing amber/green | Drive is active but predicted to fail soon — replace proactively | |
| Flashing amber | Unconfigured, predicted to fail | |
| Solid amber | Drive has failed | |
| Off | Not configured by a RAID controller |
When the front Health LED goes amber or red, the SID tells you which subsystem is unhappy. Common combinations:
| SID LED | Health LED | Power LED | Condition |
|---|---|---|---|
| Processor (amber) | Red | Amber | CPU failed / not installed / unsupported |
| Processor (amber) | Amber | Green | CPU pre-failure |
| DIMM (amber) | Red | Green | One or more DIMMs failed |
| DIMM (amber) | Amber | Green | DIMM pre-failure |
| Overtemp (amber) | Amber | Green | Cautionary temperature |
| Overtemp (amber) | Red | Amber | Critical temperature — server may shut down |
| Fan (amber) | Amber | Green | Fan failed but still meets minimum redundancy |
| Fan (amber) | Red | Green | Fan failed, no longer meeting minimum |
| PSU (amber) | Amber | Green | Redundant PSU failed (server still runs) |
| Tool | When it runs | What it's for |
|---|---|---|
| iLO 4 | Always (independent of OS) | Remote power, remote console, virtual media, Active Health System log, SNMP alerts. Reach via https://iLO-IP. |
| Active Health System | Continuous | Passive monitoring. Records model, serial, CPU, storage, memory, firmware changes. Log can be exported via iLO or IP. |
| Integrated Management Log (IML) | Continuous | Event log with 1-minute timestamps. View from iLO web UI or HPE SIM. |
| Intelligent Provisioning (IP) | Offline (F10 at POST) | Guided OS install, RAID setup via SSA, maintenance tasks. Replaces old SmartStart CD. |
| RBSU (ROM-Based Setup) | Offline (F9 at POST) | Traditional BIOS โ boot order, AMP memory mode, primary controller, serial # re-entry. |
| Smart Storage Administrator (SSA) | Online or offline via IP | Graphical RAID config. Online array expansion, rebuilds, SmartSSD wear gauge. |
| ORCA (Option ROM Config for Arrays) | Offline (F8 at POST) | Text-menu RAID โ create/delete logical drives, set boot controller. |
| Service Pack for ProLiant (SPP) | Online or offline | Bundled firmware + drivers update for the whole server. Run once a year. |
| HP Smart Update Manager (SUM) | Online | Deploy firmware/drivers across many servers from one place. |
| Automatic Server Recovery (ASR) | Always | Watchdog timer. If OS hangs, server auto-restarts after a timeout. |
| ROMPaq | Offline | System firmware (BIOS) upgrade from USB. |
Press the front Power On/Standby button. System goes from standby → on. Watch the power LED: flashing green = boot sequence running, solid green = running.
Graceful shutdown: use the OS (shutdown /s on Windows, sudo systemctl poweroff on Linux). Forced shutdown: hold the Power On/Standby button for 4+ seconds. Only use this when the OS is frozen — data loss possible.
Up to 2× Intel Xeon E5-2600 v1/v2
Up to 8 cores per socket
LGA 2011 socket
24 DIMM slots total (12 per CPU)
DDR3 ECC, up to 768 GB
AMP modes in RBSU
Up to 24× 2.5" SFF or 18× 3.5" LFF
Smart Array P420i onboard
SAS / SATA hot-plug
4× 1 Gbps embedded NICs
Dedicated iLO 4 management NIC
PCIe expansion available
Up to 4 hot-swap PSUs
460 W / 750 W / 1200 W options
92โ94% efficiency (Gold/Platinum)
5U tower
Rack-convertible (rails available)
4 hot-plug fans
The Team Lead owns this spreadsheet. Populate it with host + VM info in Week 1; update every time hardware or software changes. Export a snapshot for each weekly report.
Source spreadsheet: IT-Asset-Tracking-Spreadshee.xlsx. Three sheets: Hardware Vendor List, Hardware Asset, Software Asset Installation.

| Vendor | Product | Description | Cost | Contact | Address |
|---|---|---|---|---|---|
| CISCO | 2960 | Switch | — | Anthony Pena, [email protected] | 6300 La Calma Dr Ste 350, Austin, TX 78752 |
| HPE | ProLiant ML350p Gen8 | Tower server (Proxmox host) | $300 | Anthony Pena, [email protected] | 6300 La Calma Dr Ste 350, Austin, TX 78752 |
Total asset value: $710
| Item # | Name | Type | Location | Qty | Unit $ | Total $ | Condition |
|---|---|---|---|---|---|---|---|
| 100 | Cisco 2960 | Switch | Storage Room | — | — | $0 | Poor |
| 101 | TP-Link TL-SG1024D (main class switch) | Switch | Server Room | 1 | $110 | $110 | Normal |
| 102 | HPE ProLiant ML350p Gen8 (Proxmox / web server; 2× Xeon E5-2600, Smart Array P420i, iLO 4) | Server | Server Room | 2 | $300 | $600 | Excellent |
| Host Item # | Hardware | OS | Program | Version | Remarks |
|---|---|---|---|---|---|
| 102 | HPE ProLiant ML350p Gen8 (Web) | Fedora 39 | NGINX | 1.27.2 | NGINX Plus R33 |
| 102 | HPE ProLiant ML350p Gen8 (Web) | Fedora 39 | MySQL | 1.0.1 | Internal database |
This mirrors the Capstone Week1.docx form. Fill it in as you go; submit as PDF or Word.
Date: ___________ Team: ___________
Team members & roles:
Physically inspect and assemble the server, install Proxmox, create foundational VMs, and produce a network diagram reflecting a small-school IT environment.
Pull specs from System Utilities (F9) → System Information, or look at the front pull-out tag for model / serial.

| Component | Expected requirement | Actual spec | Notes |
|---|---|---|---|
| CPU cores | ≥ 4 virtualizable cores | 4 cores · 1× Xeon E5-2609 v2 @ 2.5 GHz | Single CPU; socket 2 empty |
| Virtualization support | Intel VT-x / AMD-V | Intel VT-x + VT-d ✓ | Confirmed in BIOS |
| RAM | ≥ 16 GB | 32 GB · 4× 8 GB DDR3L-1600 ECC RDIMM | Quad-channel; 4 of 24 slots used |
| Storage | ≥ 200 GB local | 3 TB raw · 3× 1 TB SATA in RAID 5 (≈ 2 TB usable) | Hardware RAID via Smart Array P420i; 1 drive of capacity reserved for parity |
| NICs | ≥ 2 (1 mgmt, 1 VMs) | 4× 1 GbE · HP 331i (Broadcom BCM5719) | + iLO 4 dedicated RJ-45 for remote mgmt |
- RAID configured via the Smart Array Option ROM (F8)
- Proxmox installed to /dev/sda (LVM thin pool for VM disks: local-lvm)
- Hostname: tctmachine
- Management IP 10.10.10.10/16 · Gateway 10.10.10.1 · DNS 1.1.1.1
- Web UI reachable at https://10.10.10.10:8006

Installation issues or notes:
- vmbr0 (bridges eno1 to school LAN): 10.10.10.10/16 · GW 10.10.10.1
- vmbr1 · DMZ 172.16.0.0/24 · host IP 172.16.0.10
- vmbr2 · LAN 192.168.0.0/24 · host IP 192.168.0.10

| VM name | OS & version | Static IP | Assigned role | Responsible |
|---|---|---|---|---|
| jumpbox (VM 101) | Ubuntu Server 24.04 LTS | 172.16.0.2/24 (vmbr1) | Hardened SSH jump host · DMZ entry point | Networking |
| WindowsServer01 (planned Wk 2) | Windows Server 2022 | 192.168.0.2/24 (vmbr2) | DNS / DHCP / IIS / AD | Windows |
| LinuxServer01 (planned Wk 2) | Ubuntu Server 24.04 | 192.168.0.3/24 (vmbr2) | NGINX · MariaDB · NTP · Syslog | Linux |
Create a diagram showing the Proxmox host, vmbr0, and your two base VMs. Pick one tool:
Include:
Every deliverable = a document + supporting screenshots in your docs folder.
Each tile opens a YouTube search for that exact topic in a new tab. I'm using search links instead of hardcoded videos so you always get the most recent / highest-rated tutorials instead of dead links.
Full install walkthrough, from USB boot to first login
Click-by-click VM creation
vmbr0, vmbr1, vmbr2 and what they do
Concept explainer, no prior knowledge needed
System Utilities, boot order, walkthrough
Create logical drives on the Gen8 controller
Guided RAID + OS install in one wizard
Remote power + console access, even when the OS is down
Full OS install from ISO
Add the AD DS role, promote, first users
Zones, scopes, options โ the full combo
Hosting a site on Windows
Why share and NTFS are different layers
Your internal database
Schedule + restore test
Base OS and netplan configuration
Web server basics
NoSQL database, your Week 2 deliverable
Scheduling scripts (backups, log rotation)
Change port, disable root, key-only login
Incremental backups done right
Ubuntu's simple firewall front-end
DNS maps friendly names (e.g. server.capstone.local) to IPs. A crontab line like 0 2 * * * /path/script.sh runs a script at 2 am daily.