[Off Topic] Home server - need advice

I'm looking to build a home server for firewall duties, DHCP, NAT, backup and to run a couple of virtual machines inside KVM. Basically it'll replace my current router so that I'll have a bit more flexibility with my home network.

The networking stuff and backup are pretty simple to do. I'll put them in separate virtual machines and can get that sorted out nice and quickly. The problem is I also want a virtual machine dedicated to running my development environment. I want to be able to use something like VNC to connect to it and use it from my main desktop computer (currently running Windows 8 Pro).

I have a couple of questions about that, though. If I set up Linux in a virtual machine, will I be able to view the desktop of the Linux VM at my Windows machine's native resolution (2560 x 1440) using a VNC viewer? I basically want to be able to use the machine as if I were sitting at it. According to the specs of the integrated graphics of the CPU I'm planning on buying, it does support that resolution, but only when outputting via DisplayPort. Since this will be VNC over Ethernet, will that cause any issues?

I'm pretty new with dealing with local servers and VNC in particular so sorry for the stupid questions.

10 Replies

You can use whatever resolution you want over VNC if you set it up right, but you should consider using NX instead of VNC. It will perform much better than VNC, particularly at such high resolutions.
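
Just as an example of what I mean by "set it up right": with something like TigerVNC's standalone vncserver running on the Linux guest (treat this as a sketch, package and command names vary by distro), you simply ask for the geometry you want when you start the session:

```
# On the Linux guest: start a headless VNC session at the Windows box's
# native resolution. Assumes a TigerVNC/TightVNC-style "vncserver" script.
vncserver :1 -geometry 2560x1440 -depth 24

# From the Windows machine, point a VNC viewer at guest-ip:5901
# (display :1 maps to TCP port 5900 + 1).
```

The VNC desktop size is just a framebuffer in software, so the guest's (virtual) graphics hardware and the DisplayPort limitation on the integrated GPU don't come into it.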

If we're doing "shoulda's", he should use a real bare metal hypervisor like ESXi or Xen and skip the VM-on-top-of-an-OS clusterf**k that KVM is.

@Guspaz:

You can use whatever resolution you want over VNC if you set it up right, but you should consider using NX instead of VNC. It will perform much better than VNC, particularly at such high resolutions.

Thanks. I'll look into that.

@vonskippy:

If we're doing "shoulda's", he should use a real bare metal hypervisor like ESXi or Xen and skip the VM-on-top-of-an-OS clusterf**k that KVM is.

The only reason I wanted to use KVM was because I heard it ran FreeBSD better than other options (I want the networking VM and backup VM to be running FreeBSD). I'm open to other options though.

I guess you would need to define "ran FreeBSD better" to decide.

FreeBSD 9 (and 8) is officially supported by VMware (using ESXi 5).

http://partnerweb.vmware.com/GOSIG/FreeBSD_9x.html

And the interweb is chock full of articles on running FreeBSD on top of ESXi, for example:

http://professionalvmware.com/2012/01/installing-freebsd-9-0-on-vsphere-5/

I was really hoping to avoid using ESXi.

KVM was my preference as I know lots of people use it for FreeBSD virtualisation. Frankly, though, for my uses it should be fine. I haven't really heard too many negatives about KVM, so why are you so down on it? Is there anything I should know before deploying it?

I've pretty much migrated all my VM boxes to Bare Metal Hypervisors.

The main reason is security - it's much harder to hack a specialty OS (which is what the core bare metal hypervisor is) than to hack a general-purpose OS (which is what KVM runs on top of).

Then there's less management/maintenance. The bare metal hypervisor is basically install-and-forget, while the general OS is a never-ending patch cycle and security tweaking project.

Although there's much debate (and several different benchmarks to fish whatever results you want from to prove your point), for the setups we run, a bare metal hypervisor is just (way) more efficient. The box's resources go to the guest OSes instead of the host OS.

Finally, I just like ESXi and its management tools. Easy to set up, easy to manage, easy to monitor. KVM just seems way less mature. It's clunky, and for me anyway, way harder to set up, especially getting the third-party management tools to work (like Proxmox VE or Archipel).

Speedwise, KVM has made some serious improvements in efficiency, so it's no longer a slam dunk just to pick Xen or ESXi because they're better performers; the VM playing field is pretty level performance-wise across the whole selection (i.e. ESXi, Xen, KVM, Parallels, and Hyper-V Server).

Of course YMMV, so best to setup both ESXi and KVM and see what YOU think is better.

Cool. Thanks for the help :).

One last newbie question (I hope): if I have physical access to the host system, is it possible to jump into a running VM from the host OS, or do you need to connect to the guest VM via a network connection (SSH, VNC, etc.)?

Depends on the VM setup (i.e. KVM, ESXi, Xen, VirtualBox, etc.), and all of them have their own way of connecting to guest VMs, but in general guest OSes running under a hypervisor are mere ghosts in the machine. You need to connect to them just as if they were a remote host running somewhere in the cloud.
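
To make that concrete for KVM/libvirt (just a sketch; "devbox" below is a placeholder for whatever you name the guest), from a shell on the host you'd do something like:

```
# List defined guests on the KVM host
virsh list --all

# Open the guest's graphical console locally (needs a desktop/X session on the host)
virt-viewer devbox

# Or attach to the guest's serial console, if the guest has one configured
virsh console devbox
```

Even then you're going through the hypervisor's console plumbing rather than reaching into the guest directly, so SSH or VNC over the network usually ends up being the more convenient route anyway.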

I don't understand the KVM hate. Proxmox VE makes KVM management such a piece of cake it's crazy. All you need is a web browser. I'd tell you to snag Proxmox and let it do all the work for you. I've got a custom install of Proxmox running (so I could do RAID and other such things), and it's not hard to set up at all. If you want to do that, you can essentially install Debian yourself and then tack on the Proxmox packages after the fact.

Enjoy KVM. I surely do. And you're right, VirtIO support in FreeBSD is awesome. It runs quite nicely. The FreeBSD paravirt drivers for VMware seriously suck. ESXi is not the slam-dunk it's painted to be. Use what works best for you.
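
For what it's worth, here's roughly what I use on FreeBSD guests to get virtio going (sketch from memory, so double-check against your release: on FreeBSD 9.x the drivers came from the emulators/virtio-kmod port, while newer releases ship them in the base system and may not need the loader.conf lines at all):

```
# /boot/loader.conf on the FreeBSD guest
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"
virtio_balloon_load="YES"

# /etc/rc.conf - the virtio NIC shows up as vtnet0
ifconfig_vtnet0="DHCP"
```

Keep in mind the virtio disk shows up as /dev/vtbd0 rather than ada0, so /etc/fstab has to match.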
