
server-parts.eu Blog


Dell PowerEdge XE9680 with 8× NVIDIA H200 GPUs: Special Offer
Dell PowerEdge XE9680 with 8× NVIDIA H200 GPUs. Enterprise AI server for large-scale AI training, HPC, and GPU clusters. Refurbished and deployment-ready.


NVIDIA B300 vs NVIDIA B200: What’s the Difference?
What’s the difference between NVIDIA B300 and B200 GPUs? A quick comparison of Blackwell architecture, HBM3e memory (288 GB vs 192 GB), and AI inference performance for modern data center workloads.


NVIDIA Blackwell Ultra B300: Full Specs, 288GB HBM3e Memory, 15 PFLOPS FP4, Architecture & GB300 Platform
NVIDIA B300 (Blackwell Ultra) explained: architecture, HBM3e memory, Tensor Cores, NVLink, NVSwitch, and the GB300 AI platform powering large-scale AI and LLM infrastructure.


NVIDIA B200 vs B300 GPU Comparison: Performance, Memory, Architecture & Power Consumption
NVIDIA B200 vs B300 GPU comparison: architecture, 192GB vs 288GB HBM3e memory, AI performance, power consumption, and deployment in modern data-center AI systems.


NVIDIA B300 Specs: 288GB HBM3e Memory, Power Consumption & Pricing
NVIDIA B300 Blackwell GPU explained: architecture, 288GB HBM3e memory, NVLink 5, power usage, pricing estimates, and AI server platforms like HGX and DGX.


Supermicro SYS-421GE-TNHR2-LCC Server with 8× NVIDIA H100 80GB SXM GPUs
Supermicro SYS-421GE-TNHR2-LCC 4U liquid-cooled server with 8× NVIDIA H100 80GB SXM GPUs, dual Intel Xeon Platinum 8558P, 1TB DDR5, NVSwitch, and redundant 5250W PSUs. Built for enterprise AI training and large-scale inference.


NVIDIA HGX H100 SXM5 8-GPU AI Server: Special Offer
Special Offer: NVIDIA HGX H100 SXM5 8-GPU AI server for AI training, inference, and HPC. Includes H100 SXM GPUs, NVLink, NVSwitch, and high-bandwidth HBM3 memory for data center deployments.


On-Prem (In-House) AI: Choosing the Right GPU Servers
On-prem (in-house) AI in 2026: best NVIDIA GPU servers and certified enterprise platforms for building reliable, scalable AI infrastructure.


NVIDIA Blackwell B100 vs B200 Comparison: What's the Difference?
NVIDIA B100 vs NVIDIA B200 comparison explained simply. Learn the key differences in performance, memory, power, cooling, scalability, and which Blackwell GPU is the better fit for your AI data center.


NVIDIA DGX A100 with 8× A100 40GB GPUs – Special Offer: €50,000
Refurbished NVIDIA DGX A100 available for €50,000. 8× A100 40GB SXM4 GPUs, NVSwitch, 1TB RAM, NVMe storage, 200Gb InfiniBand, 3-year warranty, pay after testing.


NVIDIA DGX H100 Server with 8× H100 80GB GPUs – Special Offer: €200,000
Special offer: NVIDIA DGX H100 with 8× H100 80GB GPUs for €200,000, fully tested and ready for LLM training, HPC clusters, RAG, and enterprise AI workloads.


Best Cloud Providers in Poland: Local Polish Data Centers Where Your Data Stays Safe
Compare leading Polish cloud providers such as Atman, Beyond.pl, Polcom, and Comarch. GDPR/RODO-compliant solutions that keep data out of reach of the U.S. CLOUD Act and FISA.


HPE Cray Storage Explained: Modular Solutions for HPC, AI, and Exascale Computing
HPE Cray Storage: a modular, high-performance solution for HPC and AI with fast data movement and independent scaling of compute, storage, and cache.


The Best AI Servers for Enterprises: Dell, HPE, Lenovo, and Supermicro Compared
AI servers with NVIDIA GPUs for HPC and deep learning from Dell, HPE, Lenovo, and Supermicro, including models with NVIDIA H100 and A100 GPUs for enterprise AI workloads. Refurbished options available from server-parts.eu.