Channel: linuxadmin: Expanding Linux SysAdmin knowledge
Viewing all 17871 articles
Browse latest View live

How do you memorize all of the command names?


Still young in my Linux career, so maybe it just comes with time. I have trouble memorizing commands and remembering which tools are already there. I'm actually reading the Evi Nemeth handbook, and trying to memorize everything in it is a real chore. How do all of the pros do this?

submitted by /u/Ih8databases

SysAdmin challenges/exercises


Hi there, I'm a SysAdmin newbie. I made a website last year and learned how to host it and manage it, while learning about an entirely new OS.

I do enjoy system administration and I would like to learn a lot more about it! Since I learn best by doing rather than by reading or watching, I was wondering if you could suggest a short list of SysAdmin challenges of all kinds, sorted by difficulty. I need ideas for new things to try and experience. For example, I thought about setting up my own mail server, etc.

What else could I do? I'm eager to learn! :)

Thanks in advance!

submitted by /u/SlimyTinyPanda

What are your top 10 CLI programs you use?


Every so often, I like to see what my top 10 CLI programs are. Here are mine.

    me@myworkstation:~$ echo $HISTFILESIZE
    40000
    me@myworkstation:~$ awk '{print $1}' < .bash_history | sort | uniq -c | sort -rn | head -10
       6827 git
       4365 vi
       4141 cd
       4138 ssh
       3287 ls
       1493 kitchen
       1352 host
        950 knife
        927 sudo
        687 ldapsearch

Not surprising: IAM is one of my main responsibilities at work, and I've been doing a lot of Chef development lately.
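A variant of the same pipeline (a sketch, run against a throwaway sample file rather than your real `.bash_history`) that also credits the command that follows `sudo`, so privileged commands aren't all lumped under one entry:

```shell
# Sample history so the pipeline can be run anywhere; point the awk
# command at ~/.bash_history for real numbers.
cat > /tmp/sample_history <<'EOF'
git status
sudo systemctl restart nginx
git pull
vi notes.txt
sudo systemctl status nginx
EOF

# Count "sudo foo" as "foo" instead of "sudo".
awk '{ if ($1 == "sudo") print $2; else print $1 }' /tmp/sample_history \
  | sort | uniq -c | sort -rn | head -10
```

With the sample data above, `systemctl` and `git` each get a count of 2 instead of `sudo` absorbing two hits.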

How about you? What are your top 10?

submitted by /u/TriumphRid3r

Just started playing around with Docker; Mind is blown..


I know the concept of containers has been around for a while, but I finally decided to start learning how to use Docker by going through the O'Reilly book.

In the first few pages, it tells me to run 'docker run debian echo "Hello World"'. So I'm like, cool, but after running it nothing really happened. Keep in mind, I'm kind of a noob, so I was just sitting there like, "What was the point of that?".

But something struck me, I ran "docker run debian ls /" and the output of the filesystem had a different result from my native filesystem.

Here's where my mind was blown. I thought, "Holy shit, did I just basically create a Debian environment inside my CentOS-running laptop in the blink of an eye?".

Had to get up and walk away from my desk to process what happened. Sorry if I sound like a noob; I just thought this shit was super cool. I know it's a lightweight, stripped-down version of Debian, but still, I thought it was pretty crazy.

I guess this leads me to my next question... Are containers supposed to replace hypervisors and VMs?

submitted by /u/anacondapoint6

Certs

High performance bond for iscsi


Has anybody done tuning when setting up a bond on an iSCSI network? I'm looking at the xmit_hash_policy parameter in particular.

submitted by /u/vdveldet

How can I find RDS user login details with time, date, and IP address

Linux Fax Server?


After many years of faithful service, the fax/printer/copier/scanner for our small office has given its last. I have a spare copier/printer/scanner that I can use in its place, but that leaves me without the ability to send/receive faxes.

Given that fax is a dying technology, I am reluctant to buy another machine. I still have an external fax modem that I used many years ago to run a HylaFAX fax server. I went to hylafax.org with the idea of building another fax server, but it looks like the last update was back in 2005.

Does anyone have any recent experience with building a fax server?

submitted by /u/MR2Rick

RHEL Satellite Content View Version Retention Policy?


Hello all,

We are going to be migrating our Satellite server from a RHEL 6 box to a RHEL 7 box, and before the move, I want to clean up as much as I can.

I was looking at our Content Views, and we have 5+ old published versions for each. What would you take into consideration when determining how long to save those versions? Or is it not worth it to delete them at all; should I just keep them all?

submitted by /u/infrascripting

What's the point of running Containers on VM's? Can Containers and Hypervisors coexist?


Keep seeing and hearing about containers and Docker and Kubernetes and all that other shit.

What I don't understand is, how can VM's and containers coexist?

In every visual representation I see of containers matched up against VMs, the container side has no virtualization layer and touts that as its main advantage.

https://blog.docker.com/2016/04/containers-and-vms-together/

But then I read stuff like the post above, and it shows them coexisting. What's the point of that? Is running multiple containerized applications on a single virtual machine even desirable, or something to pursue?

I just want to understand all this better.

submitted by /u/anacondapoint6

NIS to FreeIPA - Would everything be okay if I keep UIDs and GIDs the same?


Hi. I inherited a research cluster where all compute nodes are Linux and infrastructure services (DNS, DHCP, pf, NIS) are provided by FreeBSD. So the current authentication system is NIS. There are about 100 users who have data on a shared NetApp NFS storage system.

For reasons out of scope for this question, we need to move authentication to FreeIPA.

I'm thinking that I can stand up a FreeIPA server, and create new users and groups with the same corresponding UIDs and GIDs from NIS. If I do so, would the new FreeIPA authenticated users be able to read their old files without trouble?

I will try it with a spare node before the big cluster change, but wanted to check here whether there are huge holes in my idea.
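One way to sanity-check the mapping before the cutover (a sketch with made-up sample data; in practice the first file would come from something like `ypcat passwd` and the second from the FreeIPA entries you create) is to diff the numeric IDs per username:

```shell
# Two passwd(5)-format dumps: the NIS map and the planned FreeIPA users.
cat > /tmp/nis_passwd <<'EOF'
alice:x:1001:1001::/home/alice:/bin/bash
bob:x:1002:1002::/home/bob:/bin/bash
EOF
cat > /tmp/ipa_passwd <<'EOF'
alice:x:1001:1001::/home/alice:/bin/bash
bob:x:2002:1002::/home/bob:/bin/bash
EOF

# Join on the username, then flag any UID/GID mismatch. NFS only sees
# numeric IDs, so every line printed here means broken file ownership.
sort -t: -k1,1 /tmp/nis_passwd > /tmp/nis_sorted
sort -t: -k1,1 /tmp/ipa_passwd > /tmp/ipa_sorted
join -t: -o 1.1,1.3,1.4,2.3,2.4 /tmp/nis_sorted /tmp/ipa_sorted \
  | awk -F: '$2 != $4 || $3 != $5 { print $1 ": NIS " $2 ":" $3 " vs IPA " $4 ":" $5 }'
```

If the output is empty, the new accounts reuse the old numeric IDs and the NFS files should remain readable by their owners.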

My game plan is: bring the cluster down during scheduled maintenance, make the compute nodes FreeIPA clients, and reboot.

Any references to read or suggestions based on "How would I do it if it were me" would be very helpful to me.

submitted by /u/reacharavindh

SSH Through a Jumpbox to a Protected Server - the Easy Way

Initial Benchmarks Of The Performance Impact Resulting From Linux's x86 Security Changes

Ubuntu 16.04.3 LTS and Meltdown


Does anyone know when/if Ubuntu is issuing a security patch for Meltdown? At what point should I just give up waiting and go find a new distro?

submitted by /u/greywolfau

Accessing user files across multiple servers


Hi gang,

I have server A. It receives email and SFTP uploads from users adam, brain, carol, and diane. Each user has an email account (adam@example.com) and an sftp account. These user accounts only receive email and accept sftp (details below).

Upon receipt of an email (in, say, /home/adam/mail/new/), server A opens the email via a Perl routine and puts the attachments somewhere (say, /home/adam/email_processing/). If user diane sftps a file (to, say, the chrooted /home/diane/datadir/), the same routine grabs the file from there and moves it (to, say, /home/diane/ftp_processing).

It also logs the arrival of the email and (some work to be done here) logs the arrival of the ftp file. Let's call all of the above the "receiving" stage.

Another process then checks the processing folders (/home/../email_processing and /home/../ftp_processing/), works out what to do with the files, and writes the details of the actions into a database. Let's call this the "pre-processing" stage.

Then another process reads the database actions and carries them out; in reality, it cracks the files open and sucks the data out of them. Let's call this the "processing" stage.

Once they're processed, the files get moved to /home/../processed_files/. Eventually they're archived. The "archiving" stage.

The users, once the file is delivered to server A, have no contact with the files. Everything after the delivery stage is handled by user root.

I'm thinking that there should be a server B that does the pre-processing bit and stores the processing database; it doesn't need to be particularly powerful, and it could also handle the archiving stage. And a server C that does the actual processing, the most powerful of all (the files are all Excel/tab-delimited/CSV/database-type files).

All servers are on someone else's hardware - probably Digital Ocean.

Now here's the question. How should I allow each server to access the files? NFS share? NFS via private networking? rsync the files from server to server? Something else?

Any ideas gratefully accepted.

Cheers,

---=L

submitted by /u/Laurielounge

Simple tool to benchmark our servers before/after patching for Intel issue?


Can anyone recommend a simple tool that allows you to benchmark CPU, memory, I/O, etc.? Ideally something we could just install on the server and run to gather the info, since we won't have a lot of time between now and when we patch. Thanks!
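Absent a proper tool like sysbench or fio, even a crude shell baseline gives comparable before/after numbers (a sketch: run the same commands before and after patching on an idle box and compare the wall-clock seconds):

```shell
# Crude CPU-bound baseline: hash 64 MiB of zeros (mostly userspace work,
# few syscalls, so the mitigations should barely move this number).
start=$(date +%s)
dd if=/dev/zero bs=1M count=64 2>/dev/null | sha256sum
echo "cpu-bound: $(( $(date +%s) - start ))s"

# Syscall-heavy baseline: 200k one-byte reads. Syscall entry/exit is
# where the Meltdown page-table isolation patches are expected to hurt.
start=$(date +%s)
dd if=/dev/zero of=/dev/null bs=1 count=200000 2>/dev/null
echo "syscall-heavy: $(( $(date +%s) - start ))s"
```

If the syscall-heavy number regresses much more than the CPU-bound one after patching, that matches the expected KPTI overhead profile.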

submitted by /u/gingergringo_

Guide to a better looking Linux lockscreen

How many Servers do you have?

Suggestions for SMB setups?


Hi guys,

I'm currently the sysadmin for several startups and small companies.

What I've been doing in the past is using OpenMediaVault which is basically Debian as a base for my customers' services. They can create users with self-service (basically, the CEO logs into the web interface and creates a user when a new employee arrives) and I use the plugins for rsnapshot and OpenVPN.

Other than that, I usually just spin up Docker containers on the host and install other services there via Dockerfile, e.g. Nextcloud, Syncthing, or Odoo.

In the past, this system has led to fragmentation across the different server systems. For example, I can't yet migrate some clients from Debian jessie to stretch for compatibility reasons, while other clients already have a stretch server because they needed new features.

What I have been trying to establish is that I create a wiki page for every customer outlining the configuration on said hosts (e.g. "Nextcloud installed in Docker, dnsmasq configured to block hosts from blocklists") and additionally use Ansible for anything that can be managed remotely.

Sadly, the clients' network setups sometimes differ, and while I have mostly been standardising the network setups (router, switches, DNS), I don't think I have been standardising the server setups enough.

What would you change in this setup, and why? Keep in mind that these clients don't have a lot of money, so please don't suggest an expensive server solution (paid software is okay, just not more than ~$20 per user per month; these companies usually have only 2-20 employees).

submitted by /u/butterfs

Help! Can't mount this windows share from my Linux server?


Why does connecting to a damn Windows server have to be such a pain in the ass on Linux?

I'm running a CentOS 7 box in a VM. I'm trying to connect to a main file share to copy files to a Nextcloud skeleton folder, to set up default files for new accounts. SMBv1 is disabled on all of our servers. When I run the command below, it keeps asking me for the password for root@\servershare\sharename. What the hell am I doing wrong?

sudo mount.cifs \\servername\sharename /mnt/windows/ -o username=myname,domain=mydomain,password=mypassword -o vers=2.0
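One likely culprit (an educated guess, not a certain diagnosis): the shell eats unescaped backslashes before mount.cifs ever sees them, which is consistent with the mangled share name in the password prompt. mount.cifs accepts forward slashes, which need no quoting at all:

```shell
# The shell consumes each unescaped backslash, so the path that reaches
# mount.cifs is mangled:
printf '%s\n' \\servername\sharename    # prints: \servernamesharename

# Forward slashes avoid the problem; same options otherwise:
# sudo mount.cifs //servername/sharename /mnt/windows/ \
#     -o username=myname,domain=mydomain,vers=2.0
```

Dropping `password=` from the options (letting mount.cifs prompt, or using `-o credentials=/root/.smbcred`) also keeps the password out of `ps` output and shell history.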

submitted by /u/k1ng0fn3rds