MB - breaking the mirror might let me get to the data, but might render both disks useless, so I am anxious only to go there when I have exhausted all other options (but I may have already done so).
MG23 - stupidly, I didn't keep a record. I used du -csh /* 2> ~/filelog to find the largest directory and, happy that it was not important to me, deleted it. This last time I deleted one of my own data folders - which I no longer needed - but even that was insufficient to get a proper logon. Still stuffed .......... |
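A safe habit for next time: keep a dated copy of the du listing before deleting anything, so there is always a record of what was where. The dutop function and the ~/du-logs location below are my own inventions, just a sketch:

```shell
#!/bin/sh
# dutop <dir>: list the entries under <dir> biggest-first, and keep a dated
# copy of the listing so there is a record before anything gets deleted.
# (dutop and ~/du-logs are made-up names, not standard tools.)
dutop() {
    dir="${1:-/}"
    logdir="$HOME/du-logs"
    mkdir -p "$logdir"
    # -x stays on one filesystem, so a full / is not confused with other mounts;
    # sort -rh (human-numeric) needs GNU coreutils, which SuSE has
    du -xsh "$dir"/* 2>/dev/null | sort -rh | tee "$logdir/du-$(date +%Y%m%d).log"
}
```

Then "dutop /" shows the offender at the top of the list and leaves a log behind in ~/du-logs. |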
Originally Posted by N727NC
(Post 6064894)
MB - breaking the mirror might let me get to the data, but might render both disks useless, so I am anxious only to go there when I have exhausted all other options (but I may have already done so).
|
If you get it back, can you install the baobab application? This is a friendly GTK+ app for examining disk usage information. I use it on Ubuntu, though the link I have suggests that you'd need to compile it from source. Anything major stands out pretty obviously.
|
I've recovered the machine using a 'failsafe' logon, sufficiently that I can take a backup off the data disks. When I've got the data I need, I'll flatten the server and start again.
Thanks to MB and MG23 - and the others before - for your support. I still have no idea how it is filling several hundred gigabytes of disk in a few weeks. I'm sitting behind a Netgear firewall and the standard SuSe protections are running. Thank you for that hint bnt - I'll certainly install baobab - it will help me to keep an eye on the disk consumption. |
I would also suggest that a simple "lsof" might well be useful. This command lists all open files on a system and you might well be able to identify which file is causing a problem; alternatively use fuser -c filesystem to list all the processes that have files open on filesystem
For example (from a Solaris box):

# lsof | more
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sched 0 root cwd VDIR 85,10 512 2 /
init 1 root cwd VDIR 85,10 512 2 /
init 1 root txt VREG 85,10 48952 1610 /sbin/init
init 1 root txt VREG 85,10 41088 4499 /lib/libgen.so.1
init 1 root txt VREG 85,10 51176 4537 /lib/libuutil.so.1
init 1 root txt VREG 85,10 23276 4494 /lib/libdoor.so.1
init 1 root txt VREG 85,10 143744 4526 /lib/libscf.so.1
init 1 root txt VREG 85,10 870760 4509 /lib/libnsl.so.1
init 1 root txt VREG 85,10 51780 4514 /lib/libnvpair.so.1
init 1 root txt VREG 85,10 37400 4528 /lib/libsecdb.so.1
init 1 root txt VREG 85,10 1640776 4480 /lib/libc.so.1
init 1 root txt VREG 85,10 101036 4510 /lib/libmd.so.1
init 1 root txt VREG 85,10 93924 4530 /lib/libsocket.so.1
init 1 root txt VREG 85,10 27100 4483 /lib/libcmd.so.1
<snip>

Always look for regular files (VREG above - but Linux could well be different).

# fuser -c /var
/var: 965o 651o 603c 602o 588o 580co 520o 509o 478o 476o 472o 462c 303co 7o

Then use ps -ef | grep pid:

# ps -ef | grep 580
smmsp 580 1 0 Nov 04 ? 0:04 /usr/lib/sendmail -Ac -q15m

No surprise that sendmail is writing to /var :8

BTW is swapd running? |
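One more thing worth checking on a box that keeps filling up when du can't account for the space: a file that has been deleted while a process still holds it open. du never sees it, but the blocks stay allocated until the descriptor closes (df still counts them). The snippet below just demonstrates the effect in plain shell; the lsof flag in the comments is the Linux way to hunt such files down.

```shell
#!/bin/sh
# Demonstrate "invisible" disk usage: a file deleted while still held open.
# du never sees it, but the space is not freed until the descriptor closes.
tmp=$(mktemp)

exec 3>"$tmp"                                        # hold the file open on fd 3
dd if=/dev/zero bs=1024 count=1024 >&3 2>/dev/null   # write 1 MiB into it
rm -f "$tmp"                                         # unlink it: du no longer counts it

# At this point df still shows the megabyte in use. On Linux:
#   lsof +L1          # lists open files with link count 0 (unlinked but open)
# then restart the offending daemon to release the space.

exec 3>&-                                            # closing the fd frees the blocks
```

If lsof +L1 turns up a huge unlinked log file, restarting whichever daemon owns it gets the space back without a reboot. |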
Getting There
My thanks to all for their constructive help - especially to UNIXMAN for suggesting lsof, which I'll be using when all is back to normal.
Progress to date is that I have rebuilt the OS disk from scratch, but I was unable to persuade the OS to let me rebuild the RAID. However, I managed to mount one of the RAID disks to recover the information - a full backup is running at the moment.

When the backup is secure, I'll try to rebuild the RAID without reformatting both disks, in the hope that it will recover the mirror - any clues as to how best to go about this? At the moment one disk is Ext4 - I was obliged to reformat it after the initial OS rebuild - but the other still thinks it is part of a RAID, albeit I have managed to mount both disks as /tmp and /tmp1.

The current rig has a 3 GB partition mounted as /SWAP - would swapd be more efficient? (I think I see what you are getting at - might the swap file have grown to fill the data space? Answer is, I think, no, as I only have a fixed swap partition.)

......."Weird Trip, Man" (Oora - Edgar Broughton Band, circa 1972). |
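On rebuilding the mirror without reformatting both disks: with Linux software RAID (md), the usual approach is to assemble a degraded array from the disk that still carries its RAID superblock, then add the wiped disk back and let md resync onto it. I can't know the real device names, so /dev/md0, /dev/sdb1 and /dev/sdc1 below are placeholders; the sketch only prints the commands unless you pass --go as root (and take that backup first):

```shell
#!/bin/sh
# Sketch of re-mirroring with mdadm. Device names are placeholders, NOT taken
# from this thread. With no argument the script only prints the commands;
# pass --go (as root, after backing up) to actually run them.
MD=/dev/md0        # the array
KEEP=/dev/sdb1     # the disk that still thinks it is part of a RAID
WIPED=/dev/sdc1    # the disk that was reformatted as ext4

run() {
    if [ "$1" = "--go" ]; then
        shift
        "$@"
    else
        echo "would run: $*"
    fi
}

rebuild_mirror() {
    mode="$1"
    # 1. assemble a degraded one-disk array from the surviving member
    run $mode mdadm --assemble --run "$MD" "$KEEP"
    # 2. clear the stale superblock on the reformatted disk and re-add it;
    #    md then copies the mirror onto it without touching the good data
    run $mode mdadm --zero-superblock "$WIPED"
    run $mode mdadm "$MD" --add "$WIPED"
    # 3. watch the resync progress
    run $mode cat /proc/mdstat
}

rebuild_mirror "$@"
```

Both disks would need to be unmounted from /tmp and /tmp1 first, of course, and the array then mounted somewhere sensible of its own. |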
I would stick with a fixed swap partition unless there is a very good reason not to.
One thing you don't say is whether you are using software or hardware RAID for your mirroring. |
So Far So Good
The RAID is a Linux software RAID. I can't find a menu on the Dell to implement a hardware RAID, which would have been my preference.
The OS is now running fine, and I am rebuilding the SAMBA server so that the various windoze clients can see their data again. One issue is that because I mounted one of the drives as /tmp, there are lots of processes now using it, so the OS won't let me unmount it! Lesson to be learnt there, methinks. Any ideas on how to get out of this one? |
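On the stuck /tmp mount: fuser -vm /tmp will name the processes holding it open (then stop them, or use umount -l as a last resort). If fuser and lsof aren't to hand on the rebuilt box, a rough pure-/proc equivalent can be cobbled together; using_mount below is my own name for it, not a standard tool:

```shell
#!/bin/sh
# using_mount <dir>: print the PIDs of processes whose working directory or
# open files live under <dir>. A poor man's fuser, pure /proc (Linux only).
using_mount() {
    mnt="$1"
    for p in /proc/[0-9]*; do
        pid=${p#/proc/}
        # working directory inside the mount?
        case "$(readlink "$p/cwd" 2>/dev/null)" in
            "$mnt"|"$mnt"/*) echo "$pid"; continue ;;
        esac
        # any open file descriptor pointing inside the mount?
        for fd in "$p"/fd/*; do
            case "$(readlink "$fd" 2>/dev/null)" in
                "$mnt"|"$mnt"/*) echo "$pid"; break ;;
            esac
        done
    done | sort -un
}
# Once the PIDs are known: stop them cleanly and umount, or as a last resort
#   umount -l /mnt/point    # lazy unmount: detach now, tidy up when fds close
```

Bear in mind that with a disk mounted on /tmp itself, half the system will have files open there, which is exactly why the earlier advice to remount elsewhere and reboot is the cleaner way out. |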
Not a good idea or practice to mount a disk in the /tmp mount-point.
Start over and use some other mount points (e.g. /temp and /temp2). If you don't start over you're going to have a host of issues. |
Make certain that you haven't got an entry in /etc/fstab for the mounted filesystem; if you have then remove it and revert back to what should be in /tmp... then reboot
|
Originally Posted by N727NC
(Post 6069559)
The RAID is a linux software raid. I can't find a menu on the Dell to implement a hardware raid, which would have been my preference
Usually, Dell provides drivers for use at build time for OSes with the PERCs, and you might find that the PERC BIOS is accessible at POST. CTRL-M? |
All Sorted
Thanks to all for your support - rebuilt machine now running like a dream and no repetition of full disk so far. Also, KDE4 seems to be behaving better than on the earlier build, so I'm guessing something became corrupted along the way. It's always frustrating when you can't identify the problem, but it seems to be behind us now.
Now, about that backup........ |
Linux not so hot?
Or is it just the programmers?
London Stock Exchange halts trading over technical glitch - Telegraph. Not the first time since the new system started. |
Apparently it's all down to the migration to Millennium IT a few weeks back. Unlikely to have anything to do with the underlying OS.
|
vulcanised,
Before you start on your Microsoft fanboy mantras, I suggest you look into the history of Microsoft at the LSE - it's not exactly pretty. :rolleyes: Look, it's simple. A computer is an idiot. It knows nothing. The number of humans and lines of code involved in getting a system at the LSE up and running is probably beyond your comprehension (we're talking everything from the BIOS up to the applications the LSE run, and absolutely everything in between). It is an indisputable fact that, due to the ever-increasing complexity of computing itself plus the complex environment of the LSE, the s**t will hit the fan. The thing is, nobody knows when, how often, or how serious the error will be. In the vast majority of cases it's just little bugs that can be squashed without the journalists wetting themselves, but occasionally something will happen that the external system users will notice. Sorry for the tone, but I think someone had to tell it like it is! |
Microsoft fanboy mantras I have never been accused of that and I'm most certainly not a fan of M$. If you're not capable of making a point without such unwarranted drivel then you're better off remaining silent. |
You're the one who posted a link to overhyped journalistic nonsense.
No further comment. |
From what I've read, it looks like the problem is related to neither Microsoft's operating system nor Linux, but is instead a consequence of extremely poor IT management and design. A poor workman blames his tools.
You can build rock-solid systems on either type of operating system. You can also build garbage on either OS. If the previous system was using C# or .NET, those are already bad signs. A switch to Linux is also a bad sign. Both actions imply that the end user was simply trying to find the absolute cheapest, quickest "solution," without any regard for testing, safety, reliability, recovery, performance, etc. You get what you pay for, and if you don't know how to write specs and/or don't know anything about IT, you usually get even less than you pay for. |
Originally Posted by AnthonyGA
(Post 6271245)
A switch to Linux is also a bad sign. Both actions imply that the end user was simply trying to find the absolute cheapest, quickest "solution," without any regard for testing, safety, reliability, recovery, performance, etc.
|
Why would a switch to Linux imply that you weren't concerned about safety, reliability, performance, etc? The total burden of work and responsibility is going to be roughly the same no matter which operating system you use.

Many organizations, when they try to switch to free software, are naïvely trying to get something for nothing. If they simply wanted UNIX, then the obvious choice would be some sort of commercially-supported version of UNIX, which would bring all the technical support and responsibility of a paid vendor with it. The fact that Linux was chosen instead strongly implies that the only motivation was lowering costs, with all other considerations taking a back seat. The catch is that you cannot lower costs that way; all you're really doing is shifting them around (instead of paying money to a third party, you'll be spending it on payroll for your own employees).

I wouldn't run any important or safety-critical server software on Windows. On the desktop, only Windows or (in some cases) Mac OSX is appropriate, unless the desktop role is very tightly and deliberately constrained. For servers, in most cases, I'd install UNIX or its immediate relatives. Linux is popular mainly for reasons that are unrelated to technical considerations. I wouldn't put Windows on a server unless it had to support something that runs specifically on Windows, such as Microsoft Exchange Server or Windows domain management. |
"On the desktop, only Windows or (in some cases) Mac OSX is appropriate,......"
??????????????????? I'll inform my successful small business (mostly Linux, 2 Macs) immediately! :ok: Mac |
I do find it strange that such an important system was based on Linux and Intel, rather than on 'NIX and high-end mid-range systems from HP, Sun or IBM.
It does suggest a penny-pinching approach or attitude that may have extended into the design, development and testing regimes. As others have said, it's rarely the OS or hardware that's to blame, usually it's the design and implementation that's at fault. SD |
I also find it strange Saab
Although I visited an old site I used to work at not so long ago and found things very much changed. All the Solaris servers were gone, to be replaced by... Linux, although to be fair it was a commercial flavour. All the desktop Sparcs were gone, to be replaced by Linux desktops and MS laptops. There were loads of terminal servers doing database stuff. The enterprise Exchange system which had been the bane of my life at times had been punted, and it was back to using sendmail on the nix systems with IMAP. It all looked rather shoddy to me and certainly not what you would have expected in a blue-chip company's server room. |
Although I visited an old site I used to work at not so long ago and found things very much changed. All the Solaris servers were gone, to be replaced by... Linux, although to be fair it was a commercial flavour. All the desktop Sparcs were gone, to be replaced by Linux desktops and MS laptops. There were loads of terminal servers doing database stuff. The enterprise Exchange system which had been the bane of my life at times had been punted, and it was back to using sendmail on the nix systems with IMAP. It all looked rather shoddy to me and certainly not what you would have expected in a blue-chip company's server room.

None of the changes you describe has any real technical justification. The use of terminal servers is especially irresponsible, although I've seen it often enough. In the old days, that was called "timesharing," but timesharing worked much better than terminal servers. |
"When you are managing 60,000 desktops in 100 countries, however, the rules change."
Indeed. With their record at this sort of scale I'd be worried about using Microsoft. And even considering their massive discounts to big-business users it would be expensive - though I would write it off as a business expense and get the ordinary tax-payer to subsidise me (and Microsoft).

"You can get away with all sorts of things in a small business, including many unconventional IT policies..."

I hardly think Linux per se qualifies as unconventional nowadays - should we all then be restricted to commercial UNIX or Microsoft? Would it be "better" if I was using FreeBSD (which I considered), or are only commercial offerings acceptable? The fact is that just about any modern OS is as good (or bad) as its implementation in a business. Crap sysadmins, slack security and lazy policies will make any system liable to instability, corruption and crashes no matter how much money you have paid for it.

Mac |
No offence chaps but you are all stuck in the stone age. I don't know anybody using anything else apart from regular off the shelf Linux (when not using MS) for anything from small, fairly mission critical projects (radio station) to extremely large-scale web applications that serve tens of thousands of users.
|
Originally Posted by Booglebox
(Post 6274056)
No offence chaps but you are all stuck in the stone age. I don't know anybody using anything else apart from regular off the shelf Linux (when not using MS) for anything from small, fairly mission critical projects (radio station) to extremely large-scale web applications that serve tens of thousands of users.
Also, a web app serving 10,000 users is not "extremely large scale". |
Indeed. With their record at this sort of scale I'd be worried about using Microsoft. And even considering their massive discounts to big business users it would be expensive - though I would write it off as a business expense and get the ordinary tax-payer to subsidise me (and Microsoft). I hardly think Linux per se qualifies as unconventional nowadays …

On servers, Linux is popular because (1) it's cheap or free; (2) it has been very heavily hyped, especially by people who have never heard of UNIX; and (3) it looks a bit like UNIX (although UNIX fans will want the real thing, and I don't blame them).

Would it be "better" if I was using FreeBSD (which I considered) or are only commercial offerings acceptable? No offence chaps but you are all stuck in the stone age. I don't know anybody using anything else apart from regular off the shelf Linux (when not using MS) for anything from small, fairly mission critical projects (radio station) to extremely large-scale web applications that serve tens of thousands of users.

Linux is more popular on servers, for reasons already stated (unfortunately these reasons do not include technical superiority). A system serving ten thousand users is not "large-scale" by my definition, which comes from the world of mainframes. Even my own personal Web site serves thousands of unique visitors a day. A fairly good-sized company might have 40,000-80,000 desktops; a large company may have many more. |
The fact that Linux may be cheap or free does not stop large companies having a business model that allows them to make some extremely good revenue out of the applications that run on that O/S. Go ask IBM about Linux on System z running under z/VM.

The proprietary operating systems are often dramatically superior to Linux for a given type of job, too. Mainframe operating systems, for example, are extremely productive for the types of work for which they are designed, far more so than a generic OS like Linux. Even UNIX is a terribly poor choice for mainframes, and if misguided customers didn't insist on it, it wouldn't be in the catalog. |
Been waiting for years for a commercially viable release of OSX that can run legally on any of the stuff that passes for a PC. Now that would be ideal: nice GUI on top, NetBSD underneath. Looks like I will still be waiting for a while though.

Anyway, GUIs soak up much of the horsepower of any system equipped with them, irrespective of the OS beneath. Glistening, dancing, transparent 3-D GUIs may win beauty contests, but they are very expensive in terms of resources. Unfortunately, today's Windows is stuck with a GUI, which is one of the drawbacks that make it less suitable than UNIX and its ilk as a server. In some cases up to 80% of the processing horsepower of a system can be consumed by the GUI, so just having one on a server is a waste of money. Not only that, but many administrative tasks are much faster to carry out with a command-line interface than they are with a point-and-click GUI interface. Windows is very tiring to use as a server because it is impossible to avoid using the GUI for many tasks. I'm not sure about the extent to which you can strip the GUI out of Mac OSX, but it's pretty much impossible with Windows today.

XP, Vista, 7 … they all come from the NT code base for the most part, but over the years the rock-stable and super-secure NT code base (which is very well written) has been contaminated by imports from Windows 95, which was garbage. The original NT GUI was quite distinct from the kernel, and the system was very secure in consequence. Those days are gone. Both were progressively sacrificed for the sake of users who wanted a more "friendly" and "pretty" interface, which required gutting some of the security features to improve performance (whence DirectX et al.). I was never happy about that, but that's the way it went. The secure Program Manager and Explorer were discarded in favor of the mess from Windows 9x, destabilizing the system.
This improved the "user experience" for Windows on the desktop, but put holes in the security for Windows as a server, and made the system more difficult to lock down. It's still more secure than OSX or Linux, though, by orders of magnitude.

Apple did the same thing with OSX, bolting on vast amounts of extra code to make it pretty and friendly, and thereby undermining the security and suitability of the core OS in a server or locked-down environment. UNIXoid systems aren't really secure to begin with, but adding a GUI makes them worse. And there are many flavors of Linux that fall into the same trap, only the GUI is more primitive and less functional than that of OSX or Windows (not having billions of dollars' worth of top developers behind it). The fancier the GUI, the less suitable the system is as a server. Other UNIX systems and clones are also doing this, and I don't know why.

FreeBSD is run by a great many people with a GUI, which I suppose makes sense on the desktop (although why anyone would run any BSD on the desktop is a mystery to me), but I run it strictly as a server, with just a simple command-line interface at the console and a few SSH sessions from my desktop, thereby allowing me to run plenty of stuff on a very small machine.

Anyway, the industry doesn't seem to want to accept that you cannot be all things to all people, and you cannot be the world's best desktop AND the world's best server. Until it faces this reality, you're going to have people running the wrong OS on the wrong systems, and companies encouraging them in their error. |
Originally Posted by AnthonyGA
(Post 6277403)
Unfortunately, today's Windows is stuck with a GUI, which is one of the drawbacks that make it less suitable than UNIX and its ilk as a server. In some cases up to 80% of the processing horsepower of a system can be consumed by the GUI, so just having one on a server is a waste of money. Not only that, but many administrative tasks are much faster to carry out with a command-line interface than they are with a point-and-click GUI interface. Windows is very tiring to use as a server because it is impossible to avoid using the GUI for many tasks.
I'm not sure about the extent to which you can strip the GUI out of Mac OSX, but it's pretty much impossible with Windows today. |
This improved the "user experience" for Windows on the desktop, but put holes in the security for Windows as a server, and made the system more difficult to lock down. It's still more secure than OSX or Linux, though, by orders of magnitude. And there are many flavors of Linux that fall into the same trap, only the GUI is more primitive and less functional than that of OSX or Windows (not having billions of dollars' worth of top developers behind it). The fancier the GUI, the less suitable the system is as a server. |
:D To be honest it's not a server if it has a keyboard and monitor attached.
The ones that I used to regularly telnet into, I only really knew which country they were in and which IP address to telnet to. |
50 Places Linux is Running That You Might Not Expect
Though I believe Munich has caved in to MS pressure.... Ho hum! :ok: |
Windows Server Core: Overview | SerkTools

You do know that with Linux the GUI is entirely separate from the underlying OS, don't you? Sometimes it's even a waste on the desktop. In one of my earlier computers I had a Windows FTP application that never seemed to reach the 10 Mbps speed of the LAN for transfers. I finally discovered that it was spending most of its CPU time painting its window, and when I switched to the simple CLI version of FTP that comes with Windows, transfer rates immediately rose to the capacity of the link.

To be honest it's not a server if it has a keyboard and monitor attached.

Even so, I do SSH into the server from the desktop most of the time, as this is more flexible and makes it easier to have multiple "terminals" connected.

Which is why, if you read my post, I said running 'under z/VM' - in other words as a guest OS in a totally separate LPAR. Up until around the mid-70s, when MVS 3.8 existed, it was public domain and able to be installed on any competitor's hardware - Amdahl, ITEL. |
In general I would only ever have a monitor etc. attached when the build was being done - unless of course I had already built it virtually and it was a squirt job.
Most places have one - I forget the name of the system, but you hit the Ctrl key twice and you can get access from a goldfish-bowl 14" screen and a biohazard keyboard. I never used it, but I have wired one up and then reallocated the fancy new monitor which had been purchased for the server room. Huge screams from the Windows boys, but as the racking etc. came off the nix budget, and it used to stop the CEO's PA moaning about doing spreadsheets on an admin-standard monitor, they could go and sing.

On the subject of GUIs, it used to fill me with joy when starting at a site to find the server room full of "pipes" screensavers. You just knew you were in for months of teaching folk that didn't know how to suck eggs how to suck eggs. |
Yes, but separate or not, GUIs consume a great deal of resources and introduce many complications to the system. While this may be justifiable on a desktop, it's a tremendous waste on a server.

As you commented, SSH in from a desktop - or if you really want a GUI on the server, run something light on resources and only start it when needed. |
How is that relevant if you don't have the GUI running or don't even install it? I don't think UNIX or Linux systems should ever have default installations that put in any type of GUI. If you are running these operating systems and you don't know how to set up a GUI yourself, you don't know enough to be using these operating systems. I know that this is often done to encourage the use of these operating systems on the desktop, but they are not suitable for the desktop. The obvious exception is OSX, which has UNIX-like underpinnings but has nevertheless been heavily modified to serve more or less exclusively as a desktop (if you remove the GUI from a Mac, well, why bother paying for a Mac?). |
Incorrect. In the '70s, OS software was provided FOC when the client purchased the hardware. Since then MVS 3.8J has been readily available in the public domain, along with all associated software including compilers etc., to anybody who wants it.

However, I did some research, and it appears that some versions of MVS may have fallen into the public domain under current copyright law (which required, at one time, that copyright notices be placed on copyrighted materials published in the past in order to retain copyright protection). Apparently IBM took no steps to ensure copyright protection of some code (in the days when steps were still necessary) and has not attempted to assert copyright in some cases. The source code for mainframe operating systems historically has been more or less public, to allow customers to modify code, and no doubt because proprietary mainframe source code isn't of much use to anyone who doesn't have the corresponding hardware. However, publishing source code is not equivalent to placing something in the public domain. |
Glad that you have satisfied yourself with the fact that IBM have software in the PD. IBM do not make the source code of their OSes - for example z/OS - available to anybody. For one thing, much of it was written in PL/S, which is itself only available to staff within Big Blue. |