Channel: linuxadmin: Expanding Linux SysAdmin knowledge
Viewing all 17891 articles

AppArmor or SELinux for desktop system?


Arch Linux is my first Linux distro and, over time, I've come to realize that an inherent weakness of the distro is the lack of a complete Mandatory Access Control implementation that is both reasonably easy to use and doesn't require constant maintenance. Given that SELinux is in theory more powerful than AppArmor but also more difficult to use, and that there is no good reference policy for Arch, it seems that AppArmor is the only practical MAC of the two. I'm aware there are others, but I figured it's best to stick with one of these because they are the most popular and the ones used on enterprise systems: SELinux by Fedora and AppArmor by openSUSE.

Therefore, I'm trying to decide whether to stick with Arch and use AppArmor or switch to Fedora and use its SELinux. Regardless of the distro I use, I plan to also use the grsecurity kernel, because, based on what I've read, a kernel exploit can bypass any MAC implementation.

Perhaps it's best to ask a few questions to guide the discussion:

  • Is AppArmor adequate for someone who wants a reasonably hardened distro for desktop use? Some have told me it's ridiculously easy to bypass, while others claim you can get 95% of what a reasonable SELinux policy provides with 10% of the work put in. I'm talking about real-life scenarios and practical use, not theory (in which case I think everyone would agree that SELinux offers better protection if you have unlimited time and resources to configure policies for your particular system).

  • Is Fedora with SELinux secure out of the box? People say SELinux is difficult or cumbersome to use and maintain, but is this the case for the average desktop user?

  • Is MAC on desktop systems even essential in the first place?

I don't mind the initial time investment to get policies working and everything set up to avoid breakage, but I don't want to encounter random breakages, nor do I want to invest significant time fixing things. I don't mind quickly tweaking policies on occasion, though I don't even know whether that's necessary if I can just use existing policies for the applications on my system.

submitted by /u/gregorie12

Help with time and conditionals


Hello Linux admins! I have a small script that uses openssl to test our websites' SSL certificates and report how many days remain until expiration. However, I'm trying to automate the process so it emails me when a certificate comes within 30 days of expiry. I'm fairly new to Bash, and what I have is basically Frankenstein's monster stitched together from Google results, but it works and does what I need. Any help in this matter will be a valuable learning experience! Thank you in advance.

--dates) opt="-dates"
    FormatOutput() {
        dates=$(cat -)
        start=$(grep Before <<<"$dates" | cut -d= -f2-)
        end=$(grep After <<<"$dates" | cut -d= -f2-)
        echo valid from: $(date -d "$start" '+%F %T %Z')
        echo valid till: $(date -d "$end" '+%F %T %Z')
        d1=$(date -d "$end" +%s)
        d2=$(date +%s)
        echo $(( (d1 - d2) / 86400 )) days

EDIT:

--dates) opt="-dates"
    FormatOutput() {
        dates=$(cat -)
        start=$(grep Before <<<"$dates" | cut -d= -f2-)
        end=$(grep After <<<"$dates" | cut -d= -f2-)
        echo valid from: $(date -d "$start" '+%F %T %Z')
        echo valid till: $(date -d "$end" '+%F %T %Z')
        #echo valid till: $(date -d "$end" '+%s') - $(date '+%s')
        d1=$(date -d "$end" +%s)
        d2=$(date +%s)
        d3=$(( (d1 - d2) / 86400 ))
        if [ $d3 -lt 250 ]; then
            # note: was `echo "less than 30" | exit 1`; piping into exit
            # discards the message, so use a `;` instead
            echo "less than 30"
            exit 1
        fi
        echo $(( (d1 - d2) / 86400 ))
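
For the 30-day check itself, here is a minimal standalone sketch (the hostname and recipient address are placeholders, and mail(1) is assumed to be available) that isolates the date arithmetic into its own function:

```shell
#!/bin/bash
# days_until: whole days from now until any date string `date -d` understands.
days_until() {
    local end_s now_s
    end_s=$(date -d "$1" +%s)
    now_s=$(date +%s)
    echo $(( (end_s - now_s) / 86400 ))
}

# check_host: fetch the cert's notAfter date and warn if under 30 days remain.
check_host() {   # e.g. check_host www.example.com
    local end days
    end=$(echo | openssl s_client -servername "$1" -connect "$1:443" 2>/dev/null \
          | openssl x509 -noout -enddate | cut -d= -f2-)
    days=$(days_until "$end")
    if [ "$days" -lt 30 ]; then
        # mail(1) from mailx; swap in whatever mail client your box has
        echo "certificate for $1 expires in $days days" \
            | mail -s "SSL expiry warning: $1" admin@example.com
    fi
}
```

Running it from cron once a day is then just a loop of check_host calls over your site list.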
submitted by /u/iGoogle2

Short DNS Record TTL And Centralization Are Serious Risks For The Internet

Kickstart snippet to backup data prior to install?


I'm trying to write a kickstart config for migrating a large set of servers from Gentoo to a Red Hat derivative while keeping it as "touchless" as possible for the installers. The issue is that this is being done to support an application, and we want to preserve the app's configuration (kept in a single file) through the migration. The approach I've taken is to mount the legacy root directory in %pre, copy the config file to /tmp in the ramdisk, then copy it back into the new filesystem in %post before the chroot. This may not be the best solution.

It seems "/" is mounted from /dev/sda4 on all these systems, but rather than hard-code that assumption I'm looking for a snippet that can identify where "/" lives on the existing disks, whether that's LVM, RAID, or plain partitions, much like a rescue disk does. Do you guys have any pointers?
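
One way to sketch this (the config path /etc/app/app.conf is a placeholder for the real file, and the Gentoo root is identified by /etc/gentoo-release) is to probe every partition and LV rather than trust /dev/sda4:

```shell
# %pre fragment (sketch): find the legacy root by probing block devices,
# then stash the app config in the installer's tmpfs.
# If old LVs are not yet active, run "vgchange -ay" first.
mkdir -p /mnt/oldroot
for dev in $(lsblk -rpno NAME,TYPE | awk '$2=="part" || $2=="lvm" {print $1}'); do
    mount -o ro "$dev" /mnt/oldroot 2>/dev/null || continue
    if [ -f /mnt/oldroot/etc/gentoo-release ] && [ -f /mnt/oldroot/etc/app/app.conf ]; then
        cp /mnt/oldroot/etc/app/app.conf /tmp/app.conf
        umount /mnt/oldroot
        break
    fi
    umount /mnt/oldroot
done
# In %post --nochroot (the installed tree is mounted at /mnt/sysimage there):
#   cp /tmp/app.conf /mnt/sysimage/etc/app/app.conf
```

Note that a plain %post runs chrooted into the new system and cannot see the installer's /tmp, so the copy-back step needs %post --nochroot.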

submitted by /u/kurokame

RHEL 6.8 LACP Bond Interface Dropping Packets


Greetings,

Hoping to find some help on this issue; this is some really strange behavior, and I'm running out of ideas on how to address it.

I have a system that acts as a DHCP/TFTP/PXE server that is used to monitor, provision, etc. several servers. We'll call this server Node1, the remaining servers are Node2, Node3, etc.

Node1 is connected to a pair of 1G Ethernet Switches in a stacked configuration via 2x1G cables:

Node1 eth1 <--> sw0 port 1:48

Node1 eth2 <--> sw0 port 2:48

This connection is configured for LACP under the Linux bonding module, version 3.7.1, and has 2 IP addresses assigned to bond0 (one for the 1G network, one for the BMC network that all the slave hosts have):

modinfo bonding
filename: /lib/modules/2.6.32-642.el6.x86_64/kernel/drivers/net/bonding/bonding.ko
author: Thomas Davis, tadavis@lbl.gov and many others
description: Ethernet Channel Bonding Driver, v3.7.1
version: 3.7.1

cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
NAME=bond0
BOOTPROTO=none
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
IPADDR=10.0.0.1
NETMASK=255.255.0.0
BONDING_OPTS='mode=802.3ad miimon=100 xmit_hash_policy=layer2+3'

cat /etc/sysconfig/network-scripts/ifcfg-bond0:bmc
DEVICE=bond0:bmc
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.1.0.1
NETMASK=255.255.0.0

The issue I'm having: if I watch eth1 and eth2 in two separate tcpdump windows and try to ping a host (or a host's BMC), then whenever the ARP/DHCP/TFTP/ICMP packet goes out on eth1 and the reply comes back on eth2, the reply is ignored and effectively dropped. See the behavior below:

ping 10.1.0.2
PING 10.1.0.2 (10.1.0.2) 56(84) bytes of data.
--- 10.1.0.2 ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 6719ms

Here are the tcpdumps while that is happening:

tcpdump -i eth1 -nn icmp or arp
14:02:51.445712 IP 10.1.0.1 > 10.1.0.2: ICMP echo request, id 56913, seq 1, length 64
14:02:52.445080 IP 10.1.0.1 > 10.1.0.2: ICMP echo request, id 56913, seq 2, length 64
14:02:53.444401 IP 10.1.0.1 > 10.1.0.2: ICMP echo request, id 56913, seq 3, length 64

tcpdump -i eth2 -nn icmp or arp
14:02:51.445921 IP 10.1.0.2 > 10.1.0.1: ICMP echo reply, id 56913, seq 1, length 64
14:02:52.445266 IP 10.1.0.2 > 10.1.0.1: ICMP echo reply, id 56913, seq 2, length 64
14:02:53.444594 IP 10.1.0.2 > 10.1.0.1: ICMP echo reply, id 56913, seq 3, length 64
14:02:54.444602 IP 10.1.0.2 > 10.1.0.1: ICMP echo reply, id 56913, seq 4, length 64

Yet, the ping shows that the packets are dropped.

I have LACP configured on the stacked Ethernet switch. The LACP counters seem to properly synchronize and display up/up on both links. I've also had a RHEL6.7 system with this exact same configuration have no issues at all, so I'm at a complete loss on what could possibly be different here.
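
Not a diagnosis, just the places worth checking first in a case like this: whether the two slaves actually joined the same LACP aggregator (if not, the kernel can treat frames arriving on the "wrong" slave as foreign), and reverse-path filtering, which can silently drop asymmetric traffic:

```shell
# Per-slave aggregator IDs must match; differing IDs mean the two links
# never formed one bundle even if each shows up/up.
grep -A2 'Aggregator ID' /proc/net/bonding/bond0

# Reverse-path filtering settings that can silently discard replies.
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.bond0.rp_filter

# Per-interface drop counters, to see where the replies die.
ip -s link show eth1
ip -s link show eth2
```

Comparing these between the working RHEL 6.7 box and the failing 6.8 one should narrow down what changed.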

If anyone could help, I know it's a long shot, I would greatly appreciate it. I can provide any other details.

Thank you.

submitted by /u/topper_reppot

Linux Foundation's Linux Performance Tuning

Setting up a local domain for the first time using FreeIPA. Are there any good guides out there you can recommend for first timers?


I'm looking to set up a domain on my home office network for the first time. Mainly for local DNS, SSO, and certificate/key management but I'm interested in what else an AD-style setup has to offer me. I've done quick repairs on some Windows AD setups in the past but never really tinkered around with one beyond what was required to get a paycheck signed.

I have a VM ready. I've screwed around in the setup menus a bit. But I'd like a bit better explanation about the best practices for each configuration option so I can make sure I'm choosing my domain correctly and properly configuring DNS.

The official quick start guide just shows how to install the package and post-setup management utilities.

The official install and deployment guide is a PDF that is over three release versions out of date. It quickly loses sync with the current installer in the first few steps. For example, the second question in the setup is "Enter the hostnames of Kerberos servers in the EXAMPLE.LOCAL Kerberos realm". This is going to be a single server. Should this be localhost? Or just IPA.EXAMPLE.LOCAL? Or just IPA? In the first few pages of Google results I cannot find a guide running an installer that presents this question.

Is there some up to date primer for getting a first-time FreeIPA setup working?
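
For what it's worth, the interactive prompts map onto ipa-server-install flags, and for a single-server realm the "Kerberos server" the installer asks about is the IPA server's own FQDN, not localhost. A sketch with placeholder names (also note that .local domains are generally best avoided because they collide with mDNS):

```shell
# Placeholder names throughout; run on the machine that will be the IPA server.
# The realm is conventionally the domain name in upper case.
ipa-server-install \
  --hostname=ipa.example.lan \
  --domain=example.lan \
  --realm=EXAMPLE.LAN \
  --setup-dns \
  --forwarder=192.168.1.1
```

With --setup-dns the server also becomes the local DNS for the domain, which covers the "choosing my domain correctly and properly configuring DNS" part.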

submitted by /u/HittingSmoke

Issue with Linux authenticating against Active Directory


I'm having an odd issue authenticating my Linux server against Active Directory. I used realmd to do the configuration, so all the configs below are auto-generated. I can get my Kerberos ticket and the server has a machine account in AD.

I am using Oracle Enterprise Linux 7.2 and our AD is 2012R2.

I see two odd things in the logs. SSSD attempts to get a list of users belonging to Domain Users. The LDAP call returns no users in the group. Also, there is a Kerberos error complaining about an invalid UID.

I'm stumped. I've gotten Linux AD authentication working in other unrelated environments. Googling around about the invalid UID hasn't turned up much other than old bug reports.

Log snippets and config files below:

http://pastebin.com/FiPejE5p
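
A few first-line SSSD checks that tend to localize this class of problem (user and domain names below are placeholders):

```shell
# Flush SSSD's cache so re-tests aren't answered from stale data.
sss_cache -E

# Can NSS resolve the user and expand the group at all?
id aduser@example.com
getent group 'domain users@example.com'

# The per-domain log has the LDAP filters actually sent to AD;
# raise debug_level in sssd.conf for more detail.
tail -n 50 /var/log/sssd/sssd_example.com.log
```

If the group expands via getent but the login path still fails, the problem is more likely PAM-side than LDAP-side.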

submitted by /u/SciScott

Fix 502 error on php-fpm service reload

Connection time out when trying to access GUI for my PHP LDAP server


Hello! I'm working in a small lab with an LDAP server that's been running for the last three years. We run Centos 6 here but this particular LDAP server is on CentOS 6.5.

Recently, I've lost the ability to connect to our phpldapadmin GUI, which was formerly accessible by going to a browser and visiting https://<web-name>/phpldapadmin. I'm sure that LDAP is still running because otherwise I wouldn't be able to SSH or log into the cluster. When I do ldapsearch -x on the server it returns success, so I'm fairly sure the directory itself is fine.

The only trouble is I repeatedly get connection timeouts when accessing the phpldapadmin GUI online. So far, I've tried restarting the httpd service which didn't help. I stopped iptables temporarily and tried accessing again but that did not help either. Any clues as to how to get my connection back up?
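
A timeout (rather than a refusal) usually means nothing is answering at all, so it helps to separate "web tier down" from "network path blocked"; a sketch, assuming CentOS 6 default paths:

```shell
# Is httpd up and actually listening on 80/443?
service httpd status
netstat -tlnp | grep -E ':443|:80'

# From the server itself, bypassing the network entirely:
curl -vk https://localhost/phpldapadmin/

# httpd-side errors, if the request does arrive:
tail -n 50 /var/log/httpd/error_log
```

If the local curl works but remote access still times out, the problem is on the path (an upstream firewall, routing, or the browser host), not on the server.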

Any hints would be great. Thanks in advance!

submitted by /u/polkaron

UFW - I'm stuck on IP ranges, help!?


Hi,

So I've finally started learning how to set up firewalls on Linux. I'm using UFW because it seems pretty straightforward.

This is currently what I have in my rules:

$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22                         ALLOW IN    Anywhere
22 (v6)                    ALLOW IN    Anywhere (v6)

I'd like to open up some ports for applications running on the server, but I wish to only allow certain IP ranges like this:

192.168.4.xxx   - Port 1234 Allow
192.168.xxx.xxx - Port 80   Allow

These are example ports, but the IP ranges are what I wish to setup. I keep seeing subnets being mentioned but I don't understand how they work with UFW.

From reading docs I know that I can manually allow an IP address to a certain port using this:

sudo ufw allow from 15.15.15.51 to any port 22 

But I don't want to do something as crazy as this just to allow a range of addresses:

sudo ufw allow from 192.168.4.0 to any port 1234
sudo ufw allow from 192.168.4.1 to any port 1234
sudo ufw allow from 192.168.4.2 to any port 1234
sudo ufw allow from 192.168.4.3 to any port 1234
sudo ufw allow from 192.168.4.4 to any port 1234
sudo ufw allow from 192.168.4.5 to any port 1234
sudo ufw allow from 192.168.4.6 to any port 1234
sudo ufw allow from 192.168.4.7 to any port 1234
sudo ufw allow from 192.168.4.8 to any port 1234
sudo ufw allow from 192.168.4.9 to any port 1234
sudo ufw allow from 192.168.4.10 to any port 1234

There has to be a better way surely...

Edit: Out of curiosity, what if I wanted to allow a range from 192.168.4.0 to 192.168.4.255 but exclude 192.168.4.100 for example?
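
CIDR notation is exactly the "subnets" mechanism being mentioned: the /NN suffix says how many leading bits of the address are fixed, so /24 fixes the first three octets and covers .0 through .255 in the last one. A sketch for the two example rules, plus the exclusion from the edit (UFW evaluates rules in order, so the deny must sit ahead of the broader allow):

```shell
# 192.168.4.0/24 = 192.168.4.0 through 192.168.4.255 (first 24 bits fixed)
sudo ufw allow from 192.168.4.0/24 to any port 1234

# 192.168.0.0/16 = 192.168.0.0 through 192.168.255.255
sudo ufw allow from 192.168.0.0/16 to any port 80

# To exclude one host from the range: add a deny before the allow.
# `ufw insert 1` places the rule at position 1 so it matches first.
sudo ufw insert 1 deny from 192.168.4.100 to any port 1234
sudo ufw allow from 192.168.4.0/24 to any port 1234
```

Rule order matters because UFW (like iptables underneath) stops at the first matching rule.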

submitted by /u/SuperImaginativeName

My iptables config


I customized this iptables config for my personal server. I am running a TeamSpeak server, an nginx web server, an OpenVPN server, and SSH; those are the only things I want outside connections to be allowed to reach. Is this configuration correct for that?

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:https
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:http
ACCEPT     udp  --  anywhere             anywhere            state NEW udp dpt:openvpn
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     all  --  localhost.localdomain  anywhere
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:mysshport
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:10011
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:30033
ACCEPT     udp  --  anywhere             anywhere            state NEW udp dpt:9987
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:http
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:https
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  192.168.88.0/24      anywhere
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
submitted by /u/WanderingBonsaiTree

[Salt] Update-ca-trust enable whenever there's a new .pem


So in my module's init.sls I'm adding a .pem file. I was wondering how I can trigger update-ca-trust enable whenever I make changes in the directory /etc/pki/ca-trust/source/anchors/?

ca-pem.config:
  file.managed:
    - name: /etc/pki/ca-trust/source/anchors/ca.pem
    - source:
      - salt://CA/ca.pem
    - user: root
    - group: root
    - mode: 644
  service.running:
    - watch:
      - name: ??????
    - reload: True
    - enable: True
    - file: /etc/pki/ca-trust/source/anchors/*
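
Since update-ca-trust is a one-shot command rather than a service, one way to sketch this is cmd.run with an onchanges requisite instead of service.running (state IDs below are illustrative; "extract" is the subcommand that regenerates the trust bundles, while "enable" is a one-time switch):

```
ca-pem:
  file.managed:
    - name: /etc/pki/ca-trust/source/anchors/ca.pem
    - source: salt://CA/ca.pem
    - user: root
    - group: root
    - mode: 644

refresh-ca-trust:
  cmd.run:
    - name: update-ca-trust extract
    - onchanges:
      - file: ca-pem
```

The onchanges requisite makes the command run only when the file.managed state actually reports a change, which is exactly the "whenever there's a new .pem" behavior being asked for.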
submitted by /u/juniorsysadmin1

Trying to prove that pvmove is very safe and best practice.


Hey all,

I've had many situations at work where we need to move a logical volume (using LVM) from one storage device to another, whether that be a VM's (VMware) disks or a physical host's multipath disks (mostly 3par or NETAPP).

I've always recommended adding the new "physical" disk to the volume group, then using pvmove to migrate the extents over. Afterwards, we reclaim the unwanted physical disk.
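
The sequence being defended, sketched with placeholder device and volume group names:

```shell
pvcreate /dev/mapper/new_lun                # initialize the new disk as a PV
vgextend datavg /dev/mapper/new_lun         # add it to the volume group
pvmove /dev/old_lun /dev/mapper/new_lun     # migrate extents online; resumable if interrupted
vgreduce datavg /dev/old_lun                # drop the now-empty PV from the VG
pvremove /dev/old_lun                       # wipe the PV label before reclaiming the LUN
```

The whole thing runs with filesystems mounted, which is the key advantage over the rsync approach.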

I'm often confronted with claims that this is dangerous, as if we're moving something without a backup. But, as I understand it, pvmove sets up a temporary mirror and copies the data over segment by segment, only releasing the original extents once each copy is complete, so the data on the origin disk is not removed until it has been fully duplicated.

Now, I'm perfectly experienced with the command sequences. I've tried to corrupt something by doing things like shrinking the filesystem as it's being moved or powering off the server while in the transfer. All of which work perfectly fine. As far as I know pvmove works as well as I could imagine. It just continues where it left off until it's done.

Does anyone have any insight into whether this scales well? I've only been able to test in lab environments, where I can't do terabytes of live production database transfers or anything like that.

Any insight from more experienced professionals is appreciated. I'm just trying to get away from doing silly things like making new disks and rsyncing data over, which causes downtime for unmounting and remounting, and leaves us responsible for data we shouldn't own.

Sorry if this is formatted poorly. I'm not good at formatting on Reddit. Thanks for any answers.

Edit0: Small grammar fixes.

Edit1:

I'm getting a lot of replies saying it's just the best idea to backup before doing anything like this. I'm the biggest proponent for this. Here's how it works where I am. I'm on the team that manages the OS, and it's our policy to not be responsible for data, that whoever owns this data needs to accept this risk. We have dedicated backup engineers, along with a dedicated application team. I'm being told I'm crazy for wanting the backup team or application team to make sure they have a safe copy of working data. So, I'm trying to determine what is the safest way of limiting human error for moving the data reliably. If I'm to trust a random peer around the globe to use rsync, how are they 100% sure they've got a good working copy? If I know pvmove is using copy-on-write and making an exact block-for-block image before the original data is gone, I feel safer knowing everything got there the way it needed to. Pvmove also doesn't require downtime, whereas the rsync method does. The only way I've been able to break pvmove so far is to flat out remove the write disk mid move (I did this on a VM with the vmdk), which would be a scenario like losing 100% of available paths on multipathing.

TL;DR: My team isn't responsible for the data, but we're taking on the risk anyways, so I'm just frustrated that we can't do it the clean way of pvmove and let the backup/application teams worry about the data.

submitted by /u/Kynolin

syslog-ng permissions errors. Clean install on clean install of Centos 6


Hello all,

I've recently started to learn how to use syslog-ng with the hopes of eventually using it at work. I felt you'd be the best community to ask about this issue I'm having.

First of all here is the portion of my syslog-ng.conf that I added

I have a fresh install of CentOS and syslog-ng but I've been getting two different types of permissions errors:

Starting syslog-ng: Error binding socket; addr='AF_INET(192.168.1.198:5014)', error='Permission denied (13)'
Error initializing source driver; source='s_host'

This happens with any port I've tried save port 514.

Once I get it running on port 514 I don't see any logs or directories being created. So I run more /var/log/messages | tail and see a flood of repeated permission errors:

Oct 26 12:25:25 localhost syslog-ng[2346]: Error opening file for writing; filename='/opt/log/syslog-ng/FIREWALLS', error='Permission denied (13)'

I can get it to work correctly if I make the parent directory the /tmp directory.

This all seems very weird, since I've installed and run syslog-ng as root.


Below are the steps I've taken in debugging:

  1. Created and verified iptables rules

  2. disabled iptables altogether

  3. verified logs are coming in on the port with nc -ul <port number>

  4. Tried various ports, both below and above 1024. None but 514 have worked.

  5. Tried changing the owner in my file and running as that owner (you can see it commented out)

  6. Tried having logs start on the port before I start the service

  7. Tried having logs start on the port after I start the service.

  8. Checked to see if rsyslog was installed / running (it wasn't)

  9. CHOWN and CHMOD 777 the directories in question just to be sure


I spent several hours butting my head against this yesterday, reading guides and scouring google for similar issues but I can't seem to come across anything. Hopefully you all can help. I'm frustratingly close.

Cool little tool, though!


Edit: Thank you /u/Jimbob0i0 for the selinux tip, that was indeed it. I've never actually run into an issue with it, so I've just disabled it on my lab machine until I have a chance to read about it.
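
For future readers, since the edit confirms SELinux was the cause: the targeted fix, rather than disabling SELinux on the machine, is to tell the policy about the non-default port and log directory. A sketch, assuming the port and path from the post:

```shell
# Allow syslogd-type processes to bind the non-standard UDP port.
semanage port -a -t syslogd_port_t -p udp 5014

# Label the custom log tree so syslog-ng may write there.
semanage fcontext -a -t var_log_t '/opt/log/syslog-ng(/.*)?'
restorecon -Rv /opt/log/syslog-ng

# The denials that point at this would have shown up here:
grep AVC /var/log/audit/audit.log | tail
```

This also explains the observed symptoms exactly: 514 is the one port the stock policy already permits for syslog, and /tmp is writable under most domains while /opt is not.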

submitted by /u/Paradigm6790

Recommendations for send-only email for VPS notifications?


VPS noob here.

What are my options for getting email notifications to work for things like Fail2Ban?

I'd rather not run a full-blown email server if possible, for security reasons.
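
One common lightweight option is a send-only SMTP client such as msmtp relaying through an external provider, so nothing ever listens on port 25 locally; Fail2Ban and friends then call its sendmail-compatible interface. A sketch of /etc/msmtprc with placeholder host and account values:

```
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        /var/log/msmtp.log

account        default
host           smtp.example.com
port           587
from           alerts@example.com
user           alerts@example.com
passwordeval   "cat /etc/msmtp-password"
```

A Postfix "null client" (local submission only, relayhost set, no listening smtpd on public interfaces) is the other usual answer if you prefer a real MTA with a queue.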

submitted by /u/zebbadee

NUT shutdown hierarchy


Hi all,

I have a physical server connected to my UPS, running a NUT master. I have miscellaneous physical systems that all monitor through that master as slaves.

For systems that host virtual machines, is there an easy way to make them a master for the VMs they host, while of course remaining a slave of the master with the actual UPS connection?

In other words, I want to ensure that when a UPS shutdown event is triggered, all VMs have shut themselves down before the host itself shuts down, even though the host is not the master physically connected to the UPS.
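
NUT can model this with the dummy-ups driver's repeater mode: the VM host, itself a slave of the real master, re-publishes the UPS through its own upsd, and the VMs slave off the host. A sketch of /etc/nut/ups.conf on the VM host, with placeholder names:

```
[relayups]
    driver = dummy-ups
    port = myups@nut-master.example.lan
    desc = "Repeater of the physical UPS"
```

Each VM's upsmon then MONITORs relayups@vmhost as a slave, and the host's upsmon HOSTSYNC/FINALDELAY settings give the VMs time to finish shutting down before the host powers off, which is the ordering being asked for.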

submitted by /u/knobbysideup

[rpmbuild]installing a directory


Inside SPEC/be3-1.0.tar.gz is a directory containing a bunch of directories and files. I just want to know how I can copy the directory inside be3-1.0.tar.gz to a destination directory with the mode and user:group I want. Below is my spec file.

Name: be3
Version: 1
Release: 1.10
Summary: Database backup rpm
Source0: be3-1.0.tar.gz
License: GPL
Group: Rahul
BuildArch: noarch
BuildRoot: %{_tmppath}/%{name}-buildroot

%description
Write some descripton about your package.

%prep
%setup -q

%build

%install
mkdir -p $RPM_BUILD_ROOT/etc/be3
cp -R ./ $RPM_BUILD_ROOT/etc/be3

%clean
rm -rf $RPM_BUILD_ROOT

%post
echo . .
echo .be3 package installation!.

%files
%dir /etc/be3

and when I do rpmbuild -ba be3.spec I get the following:

+ cp -R ./ /root/rpmbuild/BUILDROOT/be3-1-1.10.x86_64/etc/be3
+ /usr/lib/rpm/find-debuginfo.sh --strict-build-id -m --run-dwz --dwz-low-mem-die-limit 10000000 --dwz-max-die-limit 110000000 /root/rpmbuild/BUILD/be3-1
extracting debug info from /root/rpmbuild/BUILDROOT/be3-1-1.10.x86_64/etc/be3/gcoaster_cos6_v11/numpy.random.mtrand.so
*** ERROR: No build ID note found in /root/rpmbuild/BUILDROOT/be3-1-1.10.x86_64/etc/be3/gcoaster_cos6_v11/numpy.random.mtrand.so
error: Bad exit status from /var/tmp/rpm-tmp.YS25Qk (%install)
RPM build errors:
    Bad exit status from /var/tmp/rpm-tmp.YS25Qk (%install)
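
Reading the error, it is the post-install debuginfo pass (find-debuginfo.sh) choking on a prebuilt .so that has no build ID, not the copy itself. A sketch of the spec additions that address both that and the ownership/mode question:

```
# Skip the debuginfo pass: the tarball ships prebuilt binaries that
# find-debuginfo.sh cannot process.
%global debug_package %{nil}

%files
# Listing the directory itself (without %dir) packages everything under
# it recursively; %defattr sets (file mode, user, group, dir mode).
%defattr(644, root, root, 755)
/etc/be3
```

Note that BuildArch: noarch may also need to go, since rpmbuild refuses arch-dependent binaries (the .so files) in a noarch package.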
submitted by /u/juniorsysadmin1

Is my VPS being used for spam?


Hi,

I have a small VPS, and I see multiple mail.log files in /var/log. The files contain entries like:

Oct 24 04:49:27 databox postfix/smtpd[21483]: connect from unknown[155.133.82.202]
Oct 24 04:49:27 databox postfix/smtpd[21483]: lost connection after AUTH from unknown[155.133.82.202]
Oct 24 04:49:27 databox postfix/smtpd[21483]: disconnect from unknown[155.133.82.202]
Oct 24 04:52:47 databox postfix/anvil[21485]: statistics: max connection rate 1/60s for (smtp:155.133.82.202) at Oct 24 04:49:27
Oct 24 04:52:47 databox postfix/anvil[21485]: statistics: max connection count 1 for (smtp:155.133.82.202) at Oct 24 04:49:27
Oct 24 04:52:47 databox postfix/anvil[21485]: statistics: max cache size 1 at Oct 24 04:49:27
Oct 24 04:58:58 databox postfix/smtpd[21604]: warning: hostname static-bbs-74-184-3-210-on-nets.com does not resolve to address 210.3.184.74: Name or service not known
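
Entries like "lost connection after AUTH" are inbound bots probing SMTP AUTH for a relay login; on their own they mean the box is being scanned, not that it is sending spam. A quick sketch to tally the probes per client IP (adjust the log path for your distro):

```shell
# Count "lost connection after AUTH" probes per source IP.
awk '/lost connection after AUTH/ {
    ip = $NF
    gsub(/unknown\[|\]/, "", ip)
    count[ip]++
}
END { for (ip in count) print count[ip], ip }' /var/log/mail.log
```

To check whether the box is actually relaying, look at the outbound side instead: mailq for a suspiciously large queue, and status=sent lines in the log for mail you did not originate.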

Thanks

submitted by /u/foxfire_user

PERC H200: I/O error with Linux kernel >= 4.4?


Has anybody ever managed to use a PERC H200 raid controller with a kernel version greater than or equal to 4.4?

The system boots up, but I am seeing I/O errors under heavy load on a C5220 with Ubuntu 16.04; dmesg reports "fault_state(0x265d)", and after what seems to be a controller reset, the device (sda, a RAID10) disappears and the filesystem is remounted read-only.

With earlier versions of the kernel, also on Ubuntu 16.04, the dmesg error appears but does not lead to a read-only filesystem.

submitted by /u/s19n