Feed aggregator

Joey Hess: it's a bird, it's a plane, it's a super monoid for propellor

Planet Debian - Sat, 17/10/2015 - 20:43

I've been doing a little bit of dynamically typed programming in Haskell, to improve Propellor's Info type. The result is kind of interesting in a scary way.

Info started out as a big record type, containing all the different sorts of metadata that Propellor needed to keep track of. Host IP addresses, DNS entries, ssh public keys, docker image configuration parameters... This got quite out of hand. Info needed to have its hands in everything, even types that should have been private to their module.

To fix that, recent versions of Propellor let a single Info contain many different types of values. Look at it one way and it contains DNS entries; look at it another way and it contains ssh public keys, etc.

As an émigré from lands where you can never know what type of value is in a $foo until you look, this was a scary prospect at first, but I found it's possible to have the benefits of dynamic types and the safety of static types too.

The key to doing it is Data.Dynamic. Thanks to Joachim Breitner for suggesting I could use it here. What I arrived at is this type (slightly simplified):

newtype Info = Info [Dynamic] deriving (Monoid)

So Info is a monoid, and it holds a bunch of dynamic values, each of which could be of any type at all. Eep!

So far, this is utterly scary to me. To tame it, the Info constructor is not exported, and so the only way to create an Info is to start with mempty and use this function:

addInfo :: (IsInfo v, Monoid v) => Info -> v -> Info
addInfo (Info l) v = Info (toDyn v : l)

The important part of that is that it only allows adding values that are in the IsInfo type class. That prevents the foot shooting associated with dynamic types, by only allowing use of types that make sense as Info. Otherwise arbitrary Strings etc could be passed to addInfo by accident, and all get concatenated together, and that would be a total dynamic programming mess.
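The post doesn't show the IsInfo class itself; judging from the propigateInfo method used later on, it is presumably something close to this sketch (the exact definition in Propellor may differ):

```haskell
-- sketch only: the real Propellor class may carry more methods
class Typeable v => IsInfo v where
    -- whether this info should propagate out to the host using it
    propigateInfo :: v -> Bool
```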

Anything you can add into an Info, you can get back out:

getInfo :: (IsInfo v, Monoid v) => Info -> v
getInfo (Info l) = mconcat (mapMaybe fromDynamic (reverse l))

Only monoids can be stored in Info, so if you ask for a type that an Info doesn't contain, you'll get back mempty.
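The technique can be illustrated with a self-contained sketch (this is a standalone re-creation, not Propellor's actual code; the Aliases type and its Monoid instance are made up for the example):

```haskell
{-# LANGUAGE DeriveDataTypeable #-}
-- standalone sketch of the Info technique; not Propellor's actual code
import Data.Dynamic
import Data.Maybe (mapMaybe)

newtype Info = Info [Dynamic]
instance Semigroup Info where
    Info a <> Info b = Info (a ++ b)
instance Monoid Info where
    mempty = Info []

-- a made-up example payload type
newtype Aliases = Aliases [String] deriving (Show, Typeable)
instance Semigroup Aliases where
    Aliases a <> Aliases b = Aliases (a ++ b)
instance Monoid Aliases where
    mempty = Aliases []

class Typeable v => IsInfo v
instance IsInfo Aliases

addInfo :: (IsInfo v, Monoid v) => Info -> v -> Info
addInfo (Info l) v = Info (toDyn v : l)

getInfo :: (IsInfo v, Monoid v) => Info -> v
getInfo (Info l) = mconcat (mapMaybe fromDynamic (reverse l))

main :: IO ()
main = do
    let i = addInfo mempty (Aliases ["a.example.com"])
                `addInfo` Aliases ["b.example.com"]
    print (getInfo i :: Aliases)
    -- prints: Aliases ["a.example.com","b.example.com"]
```

Asking for a type that was never added yields mempty, since mapMaybe fromDynamic filters out every entry of a different type.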

Crucially, IsInfo is an open type class. Any module in Propellor can make a new data type and make it an instance of IsInfo, and then that new data type can be stored in the Info of a Property, and any Host that uses the Property will have that added to its Info, available for later introspection.

For example, this weekend I'm extending Propellor to have controllers: Hosts that are responsible for running Propellor on some other hosts. Useful if you want to run propellor once and have it update the configuration of an entire network of hosts.

There can be whole chains of controllers controlling other controllers etc. The problem is, what if host foo has the property controllerFor bar and host bar has the property controllerFor foo? I want to avoid a loop of foo running Propellor on bar, running Propellor on foo, ...

To detect such loops, each Host's Info should contain a list of the Hosts it's controlling. Which is not hard to accomplish:

newtype Controlling = Controlled [Host]
    deriving (Typeable, Monoid)

isControlledBy :: Host -> Controlling -> Bool
h `isControlledBy` (Controlled hs) = any (== hostName h) (map hostName hs)

instance IsInfo Controlling where
    propigateInfo _ = True

mkControllingInfo :: Host -> Info
mkControllingInfo controlled = addInfo mempty (Controlled [controlled])

getControlledBy :: Host -> Controlling
getControlledBy = getInfo . hostInfo

isControllerLoop :: Host -> Host -> Bool
isControllerLoop controller controlled = go S.empty controlled
  where
    go checked h
        | controller `isControlledBy` c = True
        -- avoid checking loops that have been checked before
        | hostName h `S.member` checked = False
        | otherwise = any (go (S.insert (hostName h) checked)) l
      where
        c@(Controlled l) = getControlledBy h

This is all internal to the module that needs it; the rest of propellor doesn't need to know that Info is being used for this. And yet, the necessary information about Hosts is gathered as propellor runs.

So, that's a useful technique. I do wonder if I could somehow make addInfo combine values in the list that have the same type; as it is, the list can get long. And, to show an Info, the best I could do was this:

instance Show Info where
    show (Info l) = "Info " ++ show (map dynTypeRep l)

The resulting long list of the types of values stored in a host's info is not as useful as it could be. Of course, getInfo can be used to get any particular type of value:

*Main> hostInfo kite
Info [InfoVal System,PrivInfo,PrivInfo,Controlling,DnsInfo,DnsInfo,DnsInfo,AliasesInfo, ...
*Main> getInfo (hostInfo kite) :: AliasesInfo
AliasesInfo (fromList ["downloads.kitenet.net","git.joeyh.name","imap.kitenet.net","nntp.olduse.net" ...
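The same-type combining wondered about above seems doable by comparing dynTypeRep values before prepending. A hedged sketch, assuming the Info and IsInfo definitions from the post (addInfo' is a hypothetical name, not Propellor code):

```haskell
-- sketch only: merge a new value into an existing entry of the same
-- type, so the list holds at most one entry per type; assumes the
-- Info, IsInfo, toDyn/fromDynamic setup shown earlier in the post
addInfo' :: (IsInfo v, Monoid v) => Info -> v -> Info
addInfo' (Info l) v = Info (go l)
  where
    d' = toDyn v
    go [] = [d']
    go (d:ds)
        | dynTypeRep d == dynTypeRep d'
        , Just old <- fromDynamic d = toDyn (old `mappend` v) : ds
        | otherwise = d : go ds
```

Merging as old `mappend` v keeps the same within-type ordering that getInfo's reverse-then-mconcat produces.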

And finally, I keep trying to think of a better name than "Info".

Categories: Elsewhere

Iustin Pop: Server upgrades and monitoring

Planet Debian - Sat, 17/10/2015 - 19:18

Undecided whether the title should be "exercises in Yak shaving" or "paying back technical debt" or "too much complexity for personal systems". Anyway…

I started hosting my personal website and some other small stuff on a dedicated box (rented from a provider) in early 2008. Even for a relatively cheap box, it worked without issues for a good number of years. A surprising number of years, actually: the only incident was a power supply failure, which the provider fixed automatically, and then nothing for many years. Even the mechanical hard drive ran trouble-free for 7 years (Power_On_Hours: 64380; I probably got it after it had a few months of uptime). I believe it was the longest-running hard drive I've ever used (for the record: Seagate Barracuda 7200.10, ST3250310AS).

The reason I delayed the upgrade for a long time was twofold: first, at the same provider I couldn't get a similar SLA for the same amount of money. I could get better hardware, but with a worse SLA and fewer options. This is easily solvable, of course, by just finding a different provider.

The other issue was that I had never bothered to set up proper configuration management for the host; after all, it was only supposed to run Apache with ikiwiki and a few other trivial small things. The truth was that over time it kept piling up more and more "small things"… so actually changing hosts had become expensive.

As the age of the server neared 7 years, I thought to combine upgrade from Wheezy to Jessie with a HW upgrade. Managed to find a different provider that had my desired SLA and HW configuration, got the server and the only thing left was to do the migration.

Previous OS upgrades were simple as they were on the same host: rely on Debian's reliable upgrade process and, at most, slightly adjust some configs afterwards. A cross-host upgrade (I couldn't just copy the old OS, since it was also a 32-to-64 bit change) is much worse: with no previous installation to lean on, I had to manually check and port the old configuration for each individual service. This got very tedious, and I realised I had to make it somehow better.

"Proper" configuration management aside, I decided I needed proper monitoring first. I already had graphing via Munin (for a long while, actually), but no actual monitoring. Since the host only ran a few services, this was again supposed to be easy. Same mistake again.

The problem is that once you have any monitoring system set up, it's very easy to add "just one more" host or service to it. First it was only the external box, then my firewall, then the rest of my home network. Then it was the cloud services that I use: for example, checking whether my domain registrar's nameservers are still authoritative for my domain, or whether the expiration date is still far in the future. And so on…

In the end, what was in previous iterations (e.g. the Squeeze to Wheezy upgrade) only a half-weekend job got spread out over many weekends (interleaved with other activities, not working on it full time). I had to keep the old machine running for a month longer to make sure everything was up and running, and I ended up with 80 services monitored across multiple systems; the migrated machine itself accounts for almost half of them. Some of these are light checks (e.g. checking that a single vhost responds), others are aggregates. I still need to add some more checks though, especially more complex (end-to-end) ones.

The lesson I learned in all this is that, with or without configuration management in place, having monitoring makes it much easier to do host or service moves, as you know much better whether everything is "done-done" or just "almost done".

The question that remains, though: with 80 monitored services for a home network plus external systems (personal use), I'm not sure if I'm doing things right (monitoring the stuff I need) or wrong (do I really need this many things?).

Categories: Elsewhere

ParvindSharma: Father of Drupal - Dries Buytaert

Planet Drupal - Sat, 17/10/2015 - 13:08

Hello Guys,

Today, I am talking about the person who has uid 1 on the drupal.org website, the person who created Drupal and is known as the father of Drupal, the person behind one out of every 35 websites. Yes, I am talking about "Dries Buytaert".

Dries Buytaert is the original creator and project lead of the Drupal open source web publishing and collaboration platform. Buytaert serves as president of the Drupal Association, a non-profit organization formed to help Drupal flourish. He is also co-founder and chief technology officer of Acquia, a venture-backed software company that offers products and services for Drupal. Dries is also a co-founder of Mollom, a web service that helps you identify content quality and, more importantly, helps you stop website spam. A native of Belgium, Buytaert holds a PhD in computer science and engineering from Ghent University and a Licentiate in Computer Science (MSc) from the University of Antwerp. In 2008, Buytaert was named a Young Entrepreneur of Tech by BusinessWeek and an MIT TR35 Young Innovator. In 2012, Ernst & Young gave Buytaert the Entrepreneur Of The Year Award for New England.

In 2000, when he was studying at the University of Antwerp for his undergraduate degree in computer science, he developed a small internal website for the eight students who shared an ADSL connection in their dorms via a wireless connection, which was pretty rare at the time.

After they graduated, the group decided to move the site online so they could keep in touch. Dries meant to register "dorp.org" ("dorp" is Dutch for "village"), but mistyped it as "drop.org", and decided to keep the name.

For the first few years, Drupal was mostly a place to experiment with web development technologies like moderation and syndication. A community quickly began to build around it, and Drupal was born.

Sources: http://buytaert.net/ , https://www.linkedin.com/in/buytaert , https://www.drupal.org/u/dries , http://www.northstudio.com/blog/meet-father-drupal-dries-buytaert

Tags: Drupal, Entrepreneur, Dries Buytaert, Drupal Planet
Categories: Elsewhere

Russell Coker: Mail Server Training

Planet Debian - Sat, 17/10/2015 - 11:08

Today I ran a hands-on training session on configuring a MTA with Postfix and Dovecot for LUV. I gave each student a virtual machine running Debian/Jessie with full Internet access and instructions on how to configure it as a basic mail server. Here is a slightly modified set of instructions that anyone can do on their own system.

Today I learned that documentation that includes passwords on a command-line should have quotes around the password: one student used a semi-colon character in his password, which caused some confusion (it’s the command separator character in BASH). I also discovered that just telling users which virtual server to log in to is prone to errors; in future I’ll print out a list of user-names and passwords for the virtual servers and tear off one for each student, so there’s no possibility of 2 users logging in to the same system.

I gave each student a sub-domain of unixapropos.com (a zone that I use for various random sysadmin type things). I have changed the instructions to use example.com, which is the official domain for testing things (or you could use any zone that you control). The test VMs that I set up had a user named “auser”; the documentation assumes this account name. You could change “auser” to something else if you wish.

Below are all the instructions for anyone who wants to try it at home or setup virtual machines and run their own training session.

Basic MTA Configuration
  1. Run “apt-get install postfix” to install Postfix, select “Internet Site” for the type of mail configuration and enter the domain name you selected for the mail name.
  2. The main Postfix configuration file is /etc/postfix/main.cf. Change the myhostname setting to the fully qualified name of the system, something like mta.example.com.
    You can edit /etc/postfix/main.cf with vi (or any other editor) or use the postconf command to change it, eg “postconf -e myhostname=mta.example.com“.
  3. Add “home_mailbox=Maildir/” to the Postfix configuration to make it deliver to a Maildir spool in the user’s home directory.
  4. Restart Postfix to apply the changes.
  5. Run “apt-get install swaks libnet-ssleay-perl” to install swaks (a SMTP test tool).
  6. Test delivery by running the command “swaks -f auser@example.com -t auser@example.com -s localhost“. Note that swaks displays the SMTP data so you can see exactly what happens and if something goes wrong you will see everything about the error.
  7. Inspect /var/log/mail.log to see the messages about the delivery. View the message which is in ~auser/Maildir/new.
  8. When other students get to this stage run the same swaks command but with the -t changed to the address in their domain, check the mail.log to see that the messages were transferred and view the mail with less to see the received lines. If you do this on your own specify a recipient address that’s a regular email address of yours (EG a Gmail account).
Basic Pop/IMAP Configuration
  1. Run “apt-get install dovecot-pop3d dovecot-imapd” to install Dovecot POP and IMAP servers.
    Run “netstat -tln” to see the ports that have daemons listening on them, observe that ports 110 and 143 are in use.
  2. Edit /etc/dovecot/conf.d/10-mail.conf and change mail_location to “maildir:~/Maildir“. Then restart Dovecot.
  3. Run the command “nc localhost 110” to connect to POP, then run the following commands to get capabilities, login, and retrieve mail:
    user auser
    pass WHATEVERYOUMADEIT
    capa
    list
    retr 1
    quit
  4. Run the command “nc localhost 143” to connect to IMAP, then run the following commands to list capabilities, login, and logout:
    a capability
    b login auser WHATEVERYOUMADEIT
    c logout
  5. For the above commands make note of the capabilities, we will refer to that later.

Now you have a basically functional mail server on the Internet!

POP/IMAP Over SSL

To avoid password sniffing we need to use SSL. To do it properly requires obtaining a signed key for a DNS address but we can do the technical work with the “snakeoil” certificate that is generated by Debian.

  1. Edit /etc/dovecot/conf.d/10-ssl.conf and change “ssl = no” to “ssl = required“. Then add the following 2 lines:
    ssl_cert = </etc/ssl/certs/ssl-cert-snakeoil.pem
    ssl_key = </etc/ssl/private/ssl-cert-snakeoil.key
    1. Run “netstat -tln” and note that ports 993 and 995 are not in use.
    2. Edit /etc/dovecot/conf.d/10-master.conf and uncomment the following lines:
      port = 993
      ssl = yes
      port = 995
      ssl = yes
    3. Restart Dovecot, run “netstat -tln” and note that ports 993 and 995 are in use.
  2. Run “nc localhost 110” and “nc localhost 143” as before, note that the capabilities have changed to include STLS/STARTTLS respectively.
  3. Run “gnutls-cli --tofu 127.0.0.1 -p 993” to connect to the server via IMAPS and “gnutls-cli --tofu 127.0.0.1 -p 995” to connect via POP3S. The --tofu option means to “Trust On First Use”, it stores the public key in ~/.gnutls and checks it the next time you connect. This allows you to safely use a “snakeoil” certificate if all apps can securely get a copy of the key.
Postfix SSL
  1. Edit /etc/postfix/main.cf and add the following 4 lines:
    smtpd_tls_received_header = yes
    smtpd_tls_loglevel = 1
    smtp_tls_loglevel = 1
    smtp_tls_security_level = may

    Then restart Postfix. This makes Postfix log TLS summary messages to syslog and in the Received header. It also permits Postfix to send with TLS.
  2. Run “nc localhost 25” to connect to your SMTP port and then enter the following commands:
    ehlo test
    quit

    Note that the response to the EHLO command includes 250-STARTTLS, this is because Postfix was configured with the Snakeoil certificate by default.
  3. Run “gnutls-cli --tofu 127.0.0.1 -p 25 -s” and enter the following commands:
    ehlo test
    starttls
    ^D

    After the CTRL-D gnutls-cli will establish a SSL connection.
  4. Run “swaks -tls -f auser@example.com -t auser@example.com -s localhost” to send a message with SSL encryption. Note that swaks doesn’t verify the key.
  5. Try using swaks to send messages to other servers with SSL encryption. Gmail is one example of a mail server that supports SSL which can be used, run “swaks -tls -f auser@example.com -t YOURADDRESS@gmail.com” to send TLS (encapsulated SSL) mail to Gmail via swaks. Also run “swaks -tls -f auser@example.com -t YOURADDRESS@gmail.com -s localhost” to send via your new mail server (which should log that it was a TLS connection from swaks and a TLS connection to Gmail).
SASL

SASL is the system of SMTP authentication used for mail relaying. It is needed to permit devices without fixed IP addresses to send mail through a server. The easiest way of configuring Postfix SASL is to have Dovecot provide its authentication data to Postfix. Among other things, if you change Dovecot to authenticate in another way you won’t need to make any matching changes to Postfix.

  1. Run “mkdir -p /var/spool/postfix/var/spool” and “ln -s ../.. /var/spool/postfix/var/spool/postfix“, this allows parts of Postfix to work with the same configuration regardless of whether they are running in a chroot.
  2. Add the following to /etc/postfix/main.cf and restart Postfix:
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_type = dovecot
    smtpd_sasl_path = /var/spool/postfix/private/auth
    broken_sasl_auth_clients = yes
    smtpd_sasl_authenticated_header = yes
  3. Edit /etc/dovecot/conf.d/10-master.conf, uncomment the following lines, and then restart Dovecot:
    unix_listener /var/spool/postfix/private/auth {
    mode = 0666
    }
  4. Edit /etc/postfix/master.cf, uncomment the line for the submission service, and restart Postfix. This makes Postfix listen on port 587 which is allowed through most firewalls.
  5. From another system (IE not the virtual machine you are working on) run “swaks -tls -f auser@example.com -t YOURADDRESS@gmail.com -s YOURSERVER” and note that the message is rejected with “Relay access denied“.
  6. Now run “swaks -tls --auth-user auser --auth-password WHATEVER -f auser@example.com -t YOURREALADDRESS -s YOURSERVER” and observe that the mail is delivered (subject to anti-spam measures at the recipient).
Configuring a MUA

If every part of the previous 3 sections is complete then you should be able to set up your favourite MUA. Use “auser” as the user-name for SMTP and IMAP and mail.example.com as the SMTP/IMAP server, and it should just work! Of course your MUA needs to use the same DNS server for this to just work. Another possibility for testing is to have the MUA talk to the server by IP address rather than by name.

    Related posts:

    1. Mail Server Security I predict that over the course of the next 10...
    2. I need an LMTP server I am working on a system where a front-end mail...
    3. Moving a Mail Server Nowadays it seems that most serious mail servers (IE mail...
Categories: Elsewhere

Steve Kemp: Robbing Peter to pay Paul, or location spoofing via DNS

Planet Debian - Sat, 17/10/2015 - 10:57

I rarely watched TV online when I was located in the UK, but now that I've moved to Finland, with its appalling local TV choices, it has become more common.

The biggest problem with trying to watch BBC's iPlayer, and similar services, is the location restrictions.

Not a huge problem though:

  • Rent a virtual machine.
  • Configure an OpenVPN server on it.
  • Connect from $current-country to it.

The next part is the harder one: making your traffic pass over the VPN. The simple approach is to say "send everything over the VPN", but that would slow down local traffic, so instead you have to use trickery.

My approach was just to run a series of routing additions, similar to this (except I did it in the openvpn configuration, via pushed-routes):

ip -4 route add .... dev tun0

This works, but it is a pain as you have to add more and more routes. The simpler solution, which I switched to after a while, was to configure mitmproxy on the remote OpenVPN end-point and then configure that in the browser. With that in use, all your browser traffic goes over the VPN link if you enable the proxy, but nothing else does.

I've got a network device on order, which will let me watch Netflix, etc, from my TV, and I'm led to believe it won't let you set up proxies, or similar, to bypass region locks.

It occurs to me that I can configure my router to give out bogus DNS responses - if the device asks for "iplayer.bbc.com" it can return 10.10.10.10 - which is the remote host running the proxy.

I imagine this will be nice and simple, and thought I was being clever:

  • Remote OpenVPN server.
  • MITM proxy on remote VPN-host
    • Which is basically a transparent HTTP/HTTPS proxy.
  • Route traffic to it via DNS.
    • e.g. For any DNS request, if it ends in .co.uk return 10.10.10.10.

Because I can handle DNS-magic on the router I can essentially spoof my location for all the devices on the internal LAN, which is a good thing.
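On a router running dnsmasq, for example, the wildcard override could be a single configuration line (dnsmasq is my assumption here; the post doesn't name the router software, and 10.10.10.10 stands in for the proxy host as above):

```
# dnsmasq: answer 10.10.10.10 for any name ending in .co.uk
address=/co.uk/10.10.10.10
```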

Anyway I was reasonably pleased with the idea of using DNS to route traffic over the VPN, in combination with a transparent proxy. I was even going to blog about it, and say "Hey! This is a cool idea I've never heard of before".

Instead I did a quick google(.fi) and discovered that there are companies offering this as a service. They don't mention the proxying bit, but it's clearly what they're doing - for example OverPlay's SmartDNS.

So in conclusion I can keep my current setup, or I can use the income I receive from DNS hosting to pay for SmartDNS, or other DNS-based location-fakers.

Regardless. DNS. VPN. Good combination. Try it if you get bored.

Categories: Elsewhere

Drupal core announcements: Drupal core security release window on Wednesday, October 21

Planet Drupal - Sat, 17/10/2015 - 06:16
Start: 2015-10-21 (All day) America/New_York
Online meeting (e.g. IRC meeting)
Organizers: David_Rothstein

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, October 21.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix/feature release on this date; the next window for a Drupal core bug fix/feature release is Wednesday, November 4.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: Elsewhere
