Green Thinking
September 18, 2013

Increasing VMware Disk Sizes in Linux Without Rebooting

You can increase the size of a virtual disk in a VMware guest while it is running. This is supported, but Linux will not see the new size of the disk until it reboots. There is a way around this (assuming here we are using LVM disks):

1. Increase the disk size in the VMware settings.

2. Log on to the Linux guest system as root.

3. Rescan the disk:

echo "1" > /sys/class/scsi_device/<device>/device/rescan

...where <device> is the SCSI device you wish to rescan. This will take a while.

4. Increase the physical volume size by doing this:
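The whole rescan-and-grow sequence can be sketched as below. The device address 2:0:0:0, the disk /dev/sdb, and the volume group and logical volume names vg0/lv_root are all hypothetical - substitute your own, and note these commands must run as root against a real resized disk:

```shell
# tell the kernel to re-read the (now larger) disk geometry
echo "1" > /sys/class/scsi_device/2:0:0:0/device/rescan

# grow the LVM physical volume to fill the resized disk
pvresize /dev/sdb

# give all the new free space to the logical volume, then grow the filesystem
lvextend -l +100%FREE /dev/vg0/lv_root
resize2fs /dev/vg0/lv_root
```

The filesystem grow step assumes ext3/ext4; other filesystems have their own online-resize tools.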

More!
September 9, 2013

Security Breaches From The Sands of Time

I found some interesting old news, back from 1999, that someone posted a link to in the Security Now newsgroups. I've recently started listening to this podcast - it's a brilliant way to keep up with computer security news, and I feel a lot more informed having started to listen.

http://www.heise.de/tp/artikel/5/5263/1.html
http://www.heise.de/tp/artikel/2/2898/1.html

The articles were to do with NSA back doors in several pieces of software: Microsoft Windows and Lotus Notes. Both of these were verified back in 1999 by security researchers who reverse engineered the software. They traced built-in keys to the NSA, by virtue of the fact that one was given the stealthy name 'NSAKEY'. This came out of some debugging symbols mistakenly left in Service Pack 5 for Windows NT.

To some, this might be extremely old news (well, it was 14 years ago). However, it does show that at least then, Microsoft and Lotus (now owned by IBM) were willing and able to install back doors for the NSA to snoop on their customers. If they were willing and able then - why not now? So, given the revelations last week, I'd say the chances of there actually being back doors in Windows and Notes today are pretty high. Microsoft developed a reputation for poor security in their products years ago, and have been desperately trying to regain people's trust since those bad old days. I wonder if the coming revelations from the Snowden files may set them back again in winning their customers' trust.

More!
September 6, 2013

Engineering Around The Privacy Crisis

Is it a crisis? The latest news from the NSA snooping debacle suggests it is. If they have the means to deliberately insert vulnerabilities into well-known encryption standards and circumvent others, then connections previously thought to be secure - to banks, email providers and search engines - may not be any more. Bruce Schneier issued something of a call to arms yesterday, asking engineers to look at how to resolve these problems and re-engineer the internet to our own needs once again, rather than to those of some faceless security services personnel, somewhere. I am not at all reassured by reports that the NSA only spied on their exes a few times using these powerful technologies they have at their disposal.

This got me thinking about areas of trust which really rest on the word of large companies. One such area is SSL, which has long been criticised for its reliance on central certificate authorities as the purveyors of trust and identity. When getting an SSL certificate for your server, if you want it to be correctly recognised by web browsers, you must have your certificate issued via a root authority, such as Symantec, Comodo or GlobalSign, or a reseller of these. If I were the NSA, I'd try to get my own access to root certificates, so I could mount man-in-the-middle attacks on encrypted websites. That's notwithstanding problems already reported in the past with the issue of root certificates to untrusted third parties.

Although an end user may see that the website is secure, there is presently no standard validation procedure to ensure the certificate you are receiving is the one you would expect to be receiving. This has been well publicised, and there have been several cases of commercial companies using it to their advantage - notably Nokia in their mobile web browsers. What this essentially means is that it isn't that technically difficult to trick a user into submitting their 'secure' traffic via your proxy.
All the traffic will be encrypted - until it gets to your proxy - then you read it all, and forward it on to the actual website the user was trying to access. They believe they are accessing the website directly, but in reality it's all being decrypted by some third party on the way.

There are several ways to defeat this. One is the extended validation (EV) certificate that some companies, notably banks, often use. These certificates can't be spoofed - so if you are seeing a green padlock, you know the browser is verifying it in a separate way. These are all well and good, but most sites do not use them, and again, they are only as secure as the keys embedded in the browser. The green bar is also worthless in Internet Explorer, which has a way to add your own EV certificates, for 'convenience'. I think the engineers at Microsoft rather missed the point of these certificates entirely.

A more promising solution is the DANE standard, a companion technology to DNSSEC. DANE allows the fingerprint of an SSL certificate to be published as a DNS record. Your browser can then verify that the certificate you are receiving is the one the site owner intended you to receive, and not one issued by a third party in transit. This standard sounds great, but as yet it's not supported by browsers. There are some extensions that allow people to use it, but the average user certainly isn't going to do that. DNSSEC rollout has been slow, and most people's domains do not yet have the keys needed to verify the validity of the DNS records either.

This is promising though - the technologies are already there to improve the integrity of the internet; it's just a case of using them. And there's nothing like a major security scare to push people to start implementing more secure means of communication. If the NSA want to get into your computer, they probably can.
But that's not really what we are trying to prevent - it's the casual snooping of data, from anyone and everyone, just because they can, which is the problem. No warrants, no court orders, just riffling through your underwear without anyone's permission.

I can see more revelations coming out in the next few months. I am already eyeing up my Android phone with suspicion - it would be easy enough for the NSA or GCHQ to write nefarious code into the operating system to track people's locations, turn on the microphones or cameras, or record the calls and texts they send. We already know they get co-operation from Google, so why not? Indeed, it was already revealed that Apple was tracking user locations in an iPhone cache file - it seems to me that this could have been one of those helpful security issues the NSA would be happy to exploit.

The problem is that companies we trusted to act in their customers' best interests have now been revealed not to have been. They often seemed to prefer the approval of the NSA to that of their own customers. If that isn't a privacy crisis, then I don't know what is.
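For a sense of what publishing a DANE record involves: the site owner puts a hash of their certificate's public key into DNS as a TLSA record. A minimal sketch of generating the record data with OpenSSL, using a throwaway self-signed certificate and the hypothetical name example.com:

```shell
# generate a throwaway self-signed certificate (names are hypothetical)
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -subj "/CN=example.com" -days 1

# TLSA "3 1 1" record data: SHA-256 of the DER-encoded public key
hash=$(openssl x509 -in cert.pem -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 | awk '{print $2}')

# the record a DNSSEC-signed zone would publish for HTTPS on port 443
echo "_443._tcp.example.com. IN TLSA 3 1 1 $hash"
```

A validating client recomputes the same hash over the certificate it receives and rejects the connection if it doesn't match the DNS record - which is exactly what defeats the in-transit substitution described above.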

More!
September 4, 2013

Working With Haproxy

Old Article Comments

I exported these from my old wordpress blog, so they are a bit out of date, but I thought I’d keep them around for posterity.

[Andy Dorfman] - Apr 5, 2014 Excellent article. Do you think this setup will work to load balance/failover several non-clustered WebLogic instances? In my current setup, I have an Apache reverse proxy listening on port 443 with SSL termination, forwarding everything to the WebLogic instances listening on port 80. Unfortunately the failover is not very smooth. HAProxy sounds like a more robust solution. Can this be accomplished with HAProxy?

[chrisgilbert42] - Apr 5, 2014 Hi Andy. Yes, HAproxy will work in front of a couple of standalone managed servers. In the case of a non-clustered environment though, you might find the problem is to do with session failover - that won't work without clustering features enabled.

If you aren't worried about session failover, and have a mostly public web application, then it should work fine. Say you are running two standalone WebLogic servers and you don't care about session state; then HAproxy can do a decent job of detecting when one is down and straight away routing traffic to the other. It's actually a general-purpose proxy - it works with any protocol, not just HTTP.

If users do have sessions though, then they will have to log in again on the second node. That's always going to be the case without a cluster, as the Java servlets hold the session state, and if you don't replicate them, the user will lose anything in the session when they switch nodes (this could be shopping cart contents, etc.).

However, you might also want to look at the WebLogic plugins for Apache, which could help you if you are not already using them. These help route requests only to active managed servers, and have the added advantage that they can automatically resubmit a failed request to another managed server. I have not tried this setup without a cluster, but I think it possibly works OK. Chris
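The two-standalone-servers setup discussed in this thread can be sketched as a minimal haproxy.cfg fragment - the backend names, addresses and port 7001 are hypothetical placeholders, not from the article:

```
frontend www
    bind *:80
    default_backend weblogic_servers

backend weblogic_servers
    balance roundrobin
    # 'check' enables periodic health checks; as soon as one server
    # fails its checks, all traffic is routed to the surviving node
    server wls1 10.0.0.1:7001 check
    server wls2 10.0.0.2:7001 check
```

Without cookie-based persistence or clustered session replication, a failover still means users log in again on the other node, as the reply above explains.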

More!
September 4, 2013

Working With Haproxy

Although I have worked with enterprise environments running Oracle and SQL Server for quite a few years, I've yet to be involved in a real high-availability deployment. This has been for a variety of technical reasons in our company's application, and a lack of interest from most customers.

Recently, I explored the options and had the opportunity to test some load-balancing setups for our application servers at a customer site. I was impressed with the reliability and reputation of the HAproxy software, so had a look at that to begin with. In the past this software was unable to terminate SSL connections, and had to enlist help from other applications to achieve that. The latest versions have SSL support though, and aside from a bit of compiling from source, are easy to install. Here's how I did it, on CentOS 6.4.
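The compile step is short. A sketch of the build, assuming a 1.5 development tarball (the first series with native SSL) has already been downloaded from haproxy.org - the exact version number is a placeholder:

```shell
# unpack the source (version number is a placeholder)
tar xzf haproxy-1.5-dev19.tar.gz
cd haproxy-1.5-dev19

# TARGET=linux2628 matches the CentOS 6 kernel family;
# USE_OPENSSL=1 compiles in the native SSL termination support
make TARGET=linux2628 USE_OPENSSL=1 USE_PCRE=1
sudo make install
```

You will need gcc, make, and the openssl-devel and pcre-devel packages installed first.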

More!
© Green Thinking 2026