

It sucks, because all things considered, they’re great little devices. I really like mine.
They absolutely do. But it’s a symptom of capitalism. They must seek higher and higher profits each year. And this is one of their ideas to seek higher profits…
Required? That’s quite a commitment. Is this a Cloudflare thing?
There are specific TLDs that are required to be served over HTTPS; .dev is an example, since the entire TLD is on the HSTS preload list. The browser flat-out will not load a .dev domain over anything but HTTPS.
I self-host on a .dev domain. It’s extremely simple with Caddy, as it’s HTTPS by default. Anything else is kind of a pain in the ass sometimes.
I also know of people who’ve had great success with Lego, although I’ve never personally used it.
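For reference, the entire Caddyfile for a site like that can be as small as this (the domain and upstream port are placeholders, not a real setup); Caddy obtains and renews the certificate on its own:
example.dev {
    encode zstd gzip
    reverse_proxy localhost:8080
}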
The new service includes a WiFi 7 router
I don’t recommend it.
I would shoot for a 4-port 2.5GbE unmanaged switch with 2 SFP+ ports (6 ports total) for 10G networking. 2.5GbE is going to be more than enough for any WiFi solution you choose, with room to bring 10G to your WiFi if you wanted to spend a bit more on a higher-tier WiFi router, still leaving a single SFP+ port for 10G networking to your PC.
Biggest bang for your buck. Gonna set you back $40-50.
but if I ever wanted to get the max out of it, what does that take?
Kind of a lot. At least a top-to-bottom upgrade: from the modem (PON), to 10G networking gear, to new Ethernet cables, to 10G network cards. You’re looking at a few hundred if you do it right. I also had Optimum’s 8Gbps internet and was never able to get anywhere near the advertised speeds due to network saturation. IMO, the upgrade right now is too expensive to justify for what you get. If you were confident you would be able to max out the connection, that would be a different story. But ultimately it’s gonna be up to you. If you don’t mind dropping a few hundred on upgrades, then go nuts.
Reasoning skills and experience. There are entire botnets dedicated to finding servers with SSH open on port 22. If the bots can connect, the server’s IP gets added to a list to be brute forced.
I’m a per diem Linux systems administrator. Right now I have a VPS that I set up myself. It uses a non-standard SSH port, fail2ban, and rejects incoming connections to port 22. According to the connection logs, I get about 200 attempts per 24 hours from bots randomly probing ports to see if they can catch an open SSH port, and they get banned via fail2ban.
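For anyone curious, the core of that setup is only a few lines of config spread across sshd, fail2ban, and the firewall. A rough sketch (the port number and ban time are arbitrary placeholders, not my actual values):
# /etc/ssh/sshd_config: move sshd off the default port
Port 49160

# /etc/fail2ban/jail.local: ban brute-forcers hitting the new port
[sshd]
enabled  = yes
port     = 49160
maxretry = 3
bantime  = 3600

# firewall: reject anything still knocking on 22, e.g.
# iptables -A INPUT -p tcp --dport 22 -j REJECT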
I checked out some other servers that I manage, which I did not set up and have no control over how they operate. Sifting through just 3 random servers and checking connection logs, they have a combined 435,000 connection attempts in the past 6 hours between the 3 of them. These are relatively small servers with an extremely small presence. The simple fact of the matter is that they all have port 22 open and reachable, so botnets attempt to brute force them.
So just anecdotally, that’s a ratio of about 0.046%, or roughly 99.95% fewer attempts. Anyone telling you that changing the default SSH port doesn’t do anything for security has no practical experience at all. It significantly reduces your attack surface, since bots have to guess at ports until they find your sshd’s listening port before they can even begin sending attempts.
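If you want to pull your own numbers, one quick-and-dirty way to count attempts (assuming OpenSSH logging to the systemd journal; the unit may be ssh or sshd depending on your distro) is something like:
journalctl -u sshd --since "24 hours ago" | grep -cE "Failed password|Invalid user"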
And I’m a CEHv7, a literal security professional, and I say that the overwhelming majority of attacks against servers over SSH are going to come over the default port. Quite literally 99%. This means you can lower your attack surface by roughly 99% simply by changing the default SSH port…
Those posts provide no meaningful insight, and while what they say is, by the most technical of interpretations, correct, I absolutely disagree with those statements. What they mean to say is that simply changing the default SSH port isn’t, on its own, a means of protecting yourself. Meaning you shouldn’t change the default SSH port and think your server is secured, because it’s not.
Quite a different interpretation than me saying it should be a mandatory part of your security strategy.
And protecting yourself against port scanning is trivial.
Anyone underestimating the power of changing the default SSH port is someone whose opinion I can safely disregard.
Using a nonstandard port doesn’t get you much
Uhh… It gets you a lot. Specifically, unless you know the port you can’t connect… So hey, there’s that…
This community really says shit sometimes that makes me go cross-eyed…
I’m not going to do anything enterprise.
You are, though. You’re creating a GPU cluster for generative AI which is an enterprise endeavor…
consumer motherboards don’t have that many PCIe slots
The number of PCIe slots isn’t the most limiting factor on consumer motherboards. It’s the number of PCIe lanes your CPU supports and the motherboard actually has access to.
It’s difficult to find non-server-focused hardware that can do something like this, because you need a significant number of PCIe lanes to accommodate your CPU and several GPUs at full speed. Using an M.2 SSD? Even more difficult.
Your 1-GPU-per-machine plan is a decent approach. Using a Kubernetes cluster with device plugins is likely the best way to accomplish what you want here. It would involve setting up your cluster, installing the GPU drivers on each node (which exposes the device to the system), and then, when you create your Ollama container, ensuring in the prestart hook that your GPUs are exposed to the container for its use.
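As a sketch of what that ends up looking like once the NVIDIA device plugin is running on each node (assuming NVIDIA GPUs and the standard ollama/ollama image; 11434 is Ollama’s default API port), the pod just requests a GPU as a resource and the scheduler handles placement:
apiVersion: v1
kind: Pod
metadata:
  name: ollama
spec:
  containers:
    - name: ollama
      image: ollama/ollama:latest
      ports:
        - containerPort: 11434   # Ollama's default API port
      resources:
        limits:
          nvidia.com/gpu: 1      # GPU exposed by the device plugin on the node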
The issue with doing this is that 10GbE is very slow compared to your GPUs talking over PCIe. You’re networking all these GPUs together to do some cool stuff, but then you’re severely bottlenecking yourself with your network. All in all, it’s not a very good plan.
I like to use a justfile to do this all in one fell swoop:
default:
    just --list

caddy-refresh:
    caddy fmt --overwrite ~/.caddy
    caddy validate --config /etc/caddy/Caddyfile -a caddyfile

caddy-reload: caddy-refresh
    doas docker exec -it caddy caddy reload --config /etc/caddy/Caddyfile
~/.caddy is my Caddyfile, which is symlinked to /etc/caddy/Caddyfile. Doing it this way ensures there are no permission issues, and you don’t need sudo to edit your Caddyfile. So you simply nvim ~/.caddy, make your changes, and then run just caddy-reload, which runs caddy-refresh before reloading the Caddy config via Docker.
Works great, and only involves one command.
Why the hell does everything have to be AI for you people to be happy? I just plain don’t understand it. We know that AI hurts your critical thinking and reasoning skills, and we continue to pack AI into everything… It doesn’t make sense. Sooner or later you’re gonna need to ask ChatGPT whether or not you need to wipe your own ass.
About the best you can do.
Caddy manages everything, including certs for both domains. So I guess my answer would be, you don’t.
Caddy does not need 80 and 443.
By default and by all reasonable expectation, it does. Unless you can’t use the privileged HTTP/HTTPS ports, there’s no real reason to use unprivileged ones.
Besides, op doesn’t mention having problems with ports
OP said he was having issues, and this is a common issue I’ve had. Since he wasn’t specific about what the issues were, it’s really not stupid to mention it.
Well that’s dope… Didn’t know that was a thing.
The biggest issue I have with Caddy is running ancillary services, as some services attempt to utilize port 80 and/or 443 (and may not be configurable), which of course isn’t possible because Caddy monopolizes those ports. The best solution I’ve found is to migrate Caddy and my services to Docker containers and add them all to the same “caddy” network.
With your Caddy instance still monopolizing ports 80 and 443, you can use Docker’s expose or ports parameters to let your containers use port 80 and/or 443 inside the container while everything is proxied on the host network. This is what my Caddy config looks like:
{
    admin 127.0.0.1:2019
    email {email}
    acme_dns cloudflare {token}
}

domain.dev, domain.one {
    encode zstd gzip
    redir https://google.com/
}

*.domain.dev, *.domain.one {
    encode zstd gzip

    @book host bk.domain.dev bk.domain.one
    handle @book {
        reverse_proxy linkding:9090
    }

    @git host git.domain.dev git.domain.one
    handle @git {
        reverse_proxy rgit:8000
    }

    @jelly host jelly.domain.dev jelly.domain.one
    handle @jelly {
        reverse_proxy {ip}:8096
    }

    @status host status.domain.dev status.domain.one
    handle @status {
        reverse_proxy status:3000
    }

    @wg host wg.domain.dev wg.domain.one
    handle @wg {
        reverse_proxy wg:51820
    }

    @ping host ping.domain.dev ping.domain.one
    handle @ping {
        respond "pong!"
    }
}
It works very well.
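For completeness, the Docker side of that is just a shared network; a minimal compose sketch (linkding shown as one example service to match the config above; treat image tags and paths as placeholders):
networks:
  caddy:

services:
  caddy:
    image: caddy:latest
    container_name: caddy   # fixed name so docker exec -it caddy ... works
    networks: [caddy]
    ports:
      - "80:80"      # Caddy keeps the privileged ports on the host
      - "443:443"
    volumes:
      - /etc/caddy/Caddyfile:/etc/caddy/Caddyfile

  linkding:
    image: sissbruecker/linkding:latest
    networks: [caddy]
    # no host ports published; Caddy reaches it as linkding:9090 on the shared network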
It’s likely illegal. The administration would call it theft of service because it’s not authorized and they wouldn’t be wrong. I also don’t see why you would want to do it. You’re giving the IT department at your school complete access to your web history.
Lots of things are improved with a GUI. IMO this is one of them.
Having a no-nonsense and predictable folder structure to store documents makes sense for those who are organized. For those who aren’t, you can still use projects like this to sort data so they’re retrievable by everyone, not just those who know and understand your folder structure.
The intake emails are particularly interesting: receive an email with an attachment and it’s saved automatically. Excellent for repeatedly collecting data without setting anything extra up. Just create an email alias for your intake and distribute it, then wait for people to email shit to you.
Great idea, IMO.