handmade.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
handmade.social is for all handmade artisans to create accounts for their Etsy and other handmade business shops.

Server stats: 35 active users

#raid

Continued thread

Talking with the folks in the local #vintage / #retrocomputing community, they clued me in that the #ThinkPad #RAID is a steaming pile of 💩 and not worth the trouble.

🤷 Oh well. Thanks for cluing me in

So I swapped out the two testing #NVMe drives I was using and reinstalled the original sticks, only to have #windoz10 demand the #bitlocker recovery key. 🤦‍♂️

Well, time to wipe & install #windoz11 then.

Install went fine, only 4 rando #drivers to track down before all #devices were recognized and working.

Using my #CTT scripts to install the majority of applications, then remove the #spyware, #bloatware, and other garbage #micro$oft added to #windows11.

Then migrate my #data from my other ThinkPad. Welcome to my #sunday #funday

#siliconValley #SillyValley #sanfrancisco #sanfran #sanfranciscocomputers #sanfrancomputers #sanfranciscovintagecomputers #sanfranvintagecomputers #sanfranciscovintagehardware #sanfranvintagehardware
#vintagecomputing #vintagecomputint #vintagecomputer #vintagecomputers #vintagecomputalk
#vintagehardware #computerHistory #retro #VCF #vintageComputerFestival
#retrocomputing #retroComputers #WallOfRetro #retroTech #retroTechnology
#nerdsOfVintage #happyNerding
#computer #tech #computerHardware #laptop #laptops
#IBM #thinkpad #thinkpads #VintageThinkPad #X86 #WindowsVista #IBMhardware #lenovoHardware #Thinkpadnium
#upcycle #restore #TechnologyRepair #ThinkPadRepair #WasteNotWantNot #Thinkpadnium
#makeShitMonday #showmewhatyougot

ICE Raid Resources - Housing Not Handcuffs
housingnothandcuffs.org/icerai

> ICE is now targeting homeless people. This inhumane directive is just one page of the Trump administration’s anti-Black, anti-homeless, anti-migrant, anti-trans and anti-poor agenda.

Housing Not Handcuffs · ICE Raid Resources for Homeless Shelters and Service Providers
View ICE raid resources, including what to do before, during, and after a raid, know-your-rights materials, messaging guidance, and more.
Replied in thread

@nazokiyoubinbou

Yeah, announcing the raid ahead of time is an invitation to organized resistance, which can lead to violence & give trump his excuse to declare martial law.

If it doesn’t come to that, there will be a new raid on another city next week.

Think of trump’s 2017 Muslim ban & the protests it triggered; only this time the protests are quelled by US troops under the direction of Hegseth. Scary‼️

#trump #immigration #chicago #raid

@GottaLaff

I have an old #NAS - #Synology DS214se, with two HDDs in RAID mode. It is 10 years old, and unfortunately, it cannot run #Docker. So, I mostly use it as a Samba shared disk. I want to find something new to replace it, and I'm open to exploring options beyond Synology. I haven't kept up with changes in hardware, but I need something with #RAID. Regarding software, it would be good to have something based on classic #Linux, but it's not critical. Any suggestions?
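
For the "classic #Linux" route, here's a minimal sketch of what a plain two-disk mdadm RAID1 box exported over Samba looks like; device names, mount point, and share name are assumptions, not a hardware recommendation:

    # mirror two disks with mdadm, put a filesystem on the array, and mount it
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext4 /dev/md0
    mkdir -p /srv/nas && mount /dev/md0 /srv/nas

    # export the array as a Samba share (append a share definition, then restart smbd)
    cat >> /etc/samba/smb.conf <<'EOF'
    [nas]
        path = /srv/nas
        read only = no
    EOF
    systemctl restart smbd

Any small box running a stock distro can do this and run Docker alongside, which covers the gap the DS214se leaves; the actual hardware choice is independent of this sketch.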

With this new NAS comes some new choices, and I'd love thoughts on this.

I have 5 large (14 TB) HDDs and two 3 TB SSDs.

The NAS's main use is as a home storage server, mostly to store movies for use with another machine running Jellyfin. There's a secondary use for backups. That means I don't need super fast access and I don't have lots of small files (except for the backups).

Most of what the server will be doing is reading large, sequential data in the movie files. Even if there are two or three simultaneous users, they'll be network-IO-bound anyway.

I'm worried about storage failure, but I think RAIDZ2 across all 5 drives with no spares is okay. If two of the five drives fail, I can shut the entire thing down until I can get a replacement drive.

I don't seem to have a strong need to use the SSDs for a log (SLOG), read cache (L2ARC), or metadata (special vdev), but I should use them for *something*.

Does anyone have thoughts on this? What's the best use for these two drives?
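
If it helps, here's a minimal sketch of that layout; the pool name, device paths, and the choice of a mirrored special vdev for the two SSDs are assumptions, not a recommendation:

    # 5-wide raidz2 data vdev: any two of the five HDDs can fail
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # option A: mirror the two SSDs as a special vdev for metadata and small blocks
    zpool add tank special mirror /dev/sdf /dev/sdg

    # option B: one SSD as L2ARC read cache instead (cache vdevs need no redundancy)
    # zpool add tank cache /dev/sdf

    # large sequential media files benefit from a big record size
    zfs set recordsize=1M tank

One thing worth knowing before picking option A: a special vdev holds pool-critical metadata, so it should be mirrored; losing it loses the pool, while an L2ARC cache device is expendable.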

#ZFS
#RAID

Replied in thread

@Gentoo_eV Provided I get a KVM console in time, I will demonstrate my installation guide (gentoo.duxsco.de/) in English using a #Hetzner dedicated server.

  • What? Beyond Secure Boot – Measured Boot on Gentoo Linux?
  • When? Saturday, 2024-10-19 at 18:00 UTC (20:00 CEST)
  • Where? Video call via BigBlueButton: bbb.gentoo-ev.org/

The final setup will feature:

  • #SecureBoot: All EFI binaries and unified kernel images are signed.
  • #MeasuredBoot: #clevis and #tang will be used to check the system for manipulations via #TPM 2.0 PCRs and for remote LUKS unlock (no tty needed); see the sketch after this list.
  • Fully encrypted: Except for ESPs, all partitions are #LUKS encrypted.
  • #RAID: Except for ESPs, #btrfs and #mdadm based #RAID are used for all partitions.
  • Rescue System: A customised #SystemRescue (system-rescue.org/) supports SSH logins and provides a convenient chroot.sh script.
  • Hardened #Gentoo #Linux for a highly secure, highly stable production environment.
  • If enough time is left at the end, #SELinux, which provides Mandatory Access Control using type enforcement and role-based access control.
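
For the clevis/tang part mentioned above, a minimal sketch of the binding step; the device path, tang URL, and PCR selection are assumptions, not taken from the guide:

    # assume /dev/md0 already holds a LUKS2 container
    # bind it to a tang server so it can be unlocked over the network at boot
    clevis luks bind -d /dev/md0 tang '{"url": "https://tang.example.org"}'

    # additionally bind to the local TPM 2.0, sealed against PCR 7 (Secure Boot state)
    clevis luks bind -d /dev/md0 tpm2 '{"pcr_bank": "sha256", "pcr_ids": "7"}'

    # check which pins are bound to which LUKS keyslots
    clevis luks list -d /dev/md0

With the clevis hooks in the initramfs, boot can unlock the volume via tang or the TPM pin and only falls back to a passphrase prompt if both fail.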

@marcan Well, #ZFS and #Ceph have entirely different use-cases and original designs.

  • Ceph, like #HAMMER & #HAMMER2, was specifically designed to be a #cluster #filesystem, whereas ZFS & #btrfs are designed as single-host, local storage options.

  • OFC I did see and even set up some "cursed" stuff like Ceph on ZFS myself, and yes, that is a real deployment run by a real corporation in production...

forum.proxmox.com/threads/solu

Still less #cursed than what a predecessor of mine once did: deploying ZFS on top of a hardware #RAID controller!
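
For anyone wondering what the "cursed" Ceph-on-ZFS pattern looks like in practice, a hypothetical sketch (pool name, zvol name, and size are made up):

    # carve a block device (zvol) out of an existing ZFS pool
    zfs create -V 200G tank/ceph-osd0

    # hand the zvol to Ceph as an OSD backing device
    ceph-volume lvm create --data /dev/zvol/tank/ceph-osd0

Every write then passes through Ceph's replication plus ZFS's copy-on-write layer, which is exactly why it earns the label.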

Proxmox Support Forum · [Solution] CEPH on ZFS
Hi, i have many problems to install Ceph OSD on ZFS I Get you complet solution to resolve it: Step 1. (repeat on all machine) Install Ceph - #pveceph install Step 2. (run only in main machine on cluster) Init ceph - #pveceph init --network 10.0.0.0/24 -disable_cephx 1 10.0.0.0/24 - your...