Kamis, 30 Juni 2011

Samsung, Acer Chromebooks Available for Preorder

The laptops, which feature Google's Chrome OS, can now be preordered via Best Buy and Amazon, Google says. Chrome OS itself, meanwhile, remains a work in progress.


View the original article here

Sapphire Radeon HD 6770

Continuing to ensure that Linux benchmarks on the latest AMD Radeon HD graphics processors are available, the kind people at Sapphire have sent over another Radeon HD 6000 series graphics card. After previously reviewing the Sapphire Radeon HD 6570 and Sapphire Radeon HD 6870, up now is the Sapphire Radeon HD 6770. Until recently, the Radeon HD 6770 (and HD 6750) was offered only to OEM builders, but Sapphire has now begun selling various products with these graphics processors, which are re-branded Radeon HD 5770/5750 "Juniper" graphics processors.


View the original article here

SeaMonkey 2.1 Released

Version 2.1 of the SeaMonkey browser (and more) is out. "Building on the same Mozilla platform as Firefox 4, it delivers the latest developments in web technologies such as HTML5, hardware acceleration and improved JavaScript speed." Also included are cross-device synchronization, "personas," a new unified personal data manager, better plugin handling, and more; see the release notes for details.


View the original article here

Startup Claims Its CMOS Tech Cuts Power Consumption by Half

A newly unveiled startup says it has devised a CMOS platform that can cut the active power consumption of "a wide range of integrated circuit products" by half, and leakage power consumption by up to five times. SuVolta says its "PowerShrink" platform has already been licensed by Fujitsu, which will offer it in 65nm products in 2012...


View the original article here

Supercomputing Freakonomics - Finding Meaning Beyond the Headlines

Twice a year, the Top500 Project publishes its list of the fastest supercomputers in the world. In the latest announcement, we continue to see Linux dominating the list. This is nothing new, since Linux has been dominant since the mid-2000s. In fact, Linux’s share in supercomputing looks a lot like Microsoft’s historical share of the desktop market. I thought it would be interesting to take a step back and look at the performance capability of these computers as a whole, and also at how the rise of Linux is mirroring the geographical expansion of supercomputers.


Everybody tends to watch the number of Linux systems on the Top500, but there’s a fascinating story being told by the Rmax performance numbers (Rmax is the maximum performance of a computer, measured in Gflop/s, achieved in the HPL benchmark). In many ways, this is a much more enlightening statistic, because it shows us the overall nature of performance on this list, instead of just focusing on individual computers. (This time around, five Linux systems were actually bumped off the bottom of the list, even though Linux’s *total* computing power grew by 38%.)


Linux dominance in the overall compute power of this list isn’t surprising: Linux is used in every one of the top ten computers, and there is a full order of magnitude difference in performance between the first machine and the twelfth, a gap that only widens as you move down the list. The first non-Linux system shows up at number 40.


Supercomputing power was on the rise well before Linux arrived, but when you look historically, it was Linux-powered machines that really caused the big ramp in the mid 2000s. In fact, when you graph the historical Rmax results of the top500 by OS, you can see that not only has supercomputing gone almost entirely to Linux, it’s also been the only OS driving the exponentially rising curve since 2005.


Next let’s look at where this is happening. This time Fujitsu of Japan tops the list. We have also seen players from China and Europe entering the fray. What’s really interesting is when you look at how this is distributed worldwide, and the role that Linux (including Linux machines that are classified as “Mixed,” like BlueGene) plays in making this happen.


It’s not surprising that the list has become very geographically diverse over time. What’s interesting is that this, too, is being driven almost entirely by Linux. In the graph below, all of the colored segments reflect the computing power deployed on “Linux” and “Mixed” machines in countries around the world. Blue is the US, white is Japan, red is China, orange is France, yellow is Germany, and so on. The dark segment on the bottom is all of the computing power deployed worldwide on platforms _other_ than Linux. Notice anything? (Here’s a hint, look at what OS is enabling this national diversity in supercomputing.)


Last, the good news here is that there is more and more raw computing power being made available on a global basis thanks to Linux - and a lot of this innovation is making its way back into the kernel. As more countries start to use smart grid technology or seek to forecast the effects of global warming, there is one common thread – the need for more computing power is endless. Just as we’ve come to understand about Watson (beyond embarrassing humans at Jeopardy!), this technology will be used in smaller systems as we address one of the more pressing business issues of today - big data.


Once again these numbers are great for Linux. But more than the numbers, it is Linux’s ability to provide access to source code for anyone, to be optimized, and to have those optimizations returned to the common projects for ever-increasing innovation that has created an unbreakable virtuous cycle in computing. I would also like to take this opportunity to congratulate our platinum member Fujitsu, which has done impressive research and development on Linux in supercomputers and for the enterprise, and which in this announcement has taken the number one position.


View the original article here

Task Management from the CLI to Android with Todo.txt

In the never-ending quest for task-management software, most Linux users gravitate towards fancy GUIs, multi-faceted import/export, and endless customization. Sadly, all that achieves is a cornucopia of incompatible GUI to-do managers, and when one goes dormant, the others are no help. But if you take the "less is more" approach instead, you'll find todo.txt a worthy alternative. Todo.txt stores its data in a flat text file that any application can read, but it gives you full access from the command line as well as from Android and other mobile devices.


Technically speaking, the command line interface is properly called either todo.txt-cli (which is the name of its public Github project) or todo.sh (which is the name of the actual script you use to add, update, prioritize, and assign tasks). But for most people, todo.txt is the preferred moniker, because it highlights the key feature — the plain-text, human-readable, and "future-proof" todo file. The CLI script is the canonical way to access and change the file, but its simple format has led to a healthy ecosystem of other front-ends, including web-based and Android interfaces.


The principle of todo.txt evolves directly from a simple text file listing one task per line, with a minimum of syntax added on. In fact, according to the wiki documentation, there are just three features that todo.txt views as essential: prioritization, projects, and contexts. Prioritization simply allows you to sort tasks by importance, and is denoted with parentheses: (A) is a "most important" task, (B) is second-most important, and so on. Each "project" is a grouping of related tasks, and a project name is indicated with a plus sign, such as +RoofRepair, +Thesis, or +WorldDomination. Contexts are directly borrowed from the Getting Things Done (GTD) approach popularized by David Allen, and are denoted with an at sign, such as @home, @phone, or @undergroundlair.


The kicker is that none of these syntactic features are required. A perfectly valid entry in todo.txt is as simple as:

Buy root beer

... but you can layer on the other features as desired, and, with the exception of priorities, in whatever order feels natural. All of the following are valid entries as well:

(A) Buy root beer
Solve Rubik's Cube +SelfImprovement @work
(D) +Music +Backyard Install Outdoor Speakers @home
Find lost wallet @home @office @car @park +Life +Finances +RememberingThings

Priorities have to come first, for parseability's sake — that way, if you need to add the task "Turn my side business into a registered 501(C)(3)," it's unambiguous. Some of the front-ends also add creation dates, and if you use them, they either have to come first or directly after a priority indicator, using the YYYY-MM-DD format. This also prevents ambiguity.
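For example, a prioritized entry with a creation date would look like the following (the date here is purely illustrative):

(A) 2011-06-28 Turn my side business into a registered 501(C)(3)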


Your task list can also include completed items — for various reasons, you may need to retain them in the file even after you are finished with them. The official syntax for a completed item is that it starts with an X (or a lowercase x) followed by a space, which means that if "X-ray" or "X-Men" happens to start one of your to-do items, the hyphen is all that keeps it from being misread as complete, so be sure not to forget it. If a completed item has a date immediately following the initial X-and-space, this is officially interpreted as the completion date. That could result in some ambiguity in tasks that already started with a date, depending on the front-end you use, so several of the add-ons use their own metadata format such as due:YYYY-MM-DD. But you don't need to worry about that to get started.
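To make that concrete, here is how completed entries would appear in the file (the date, again, is illustrative, and marks when the task was finished):

x 2011-06-30 Buy root beer
x Solve Rubik's Cube +SelfImprovement @work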


The file format, therefore, is simple. Using it is also simple, thanks to the todo.sh shell script. The downloads page hosts bundles in .tar.gz and .zip archive formats, which hints that the package is usable on a variety of operating systems. The latest release is version 2.7, from August 2010. Inside the archive you'll find two files: todo.sh (the script) and todo.cfg (its configuration file).


The todo.sh script is written for Bash, the default shell on most Linux distros. But when it says Bash, it means Bash: not csh, sh, ksh, or that shell your lab partner wrote. Quite a few users have posted questions on the mailing list about mysterious errors that trace back to not having Bash installed, and the system falling back on a slightly-incompatible shell.


You can install todo.sh anywhere; just make sure you make it executable with chmod +x todo.sh. Open the todo.cfg configuration file in a text editor, and edit the "Your todo.txt directory" line to reflect where you want to keep your main todo.txt file, archives, and various ancillary files. If you're not sure, try /home/yourusername/.todo — but also make sure to create the directory you specify. You can dig into the config file some more and play with color settings and list-sorting preferences, but that is optional.
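As a transcript, a minimal setup session might look like this sketch; the archive name is assumed from the version number above, and the TODO_DIR variable name reflects the stock configuration file, so verify both against your own download:

tar xzf todo.txt_cli-2.7.tar.gz      # archive name assumed; adjust to your download
chmod +x todo.sh                     # make the script executable
mkdir -p ~/.todo                     # create the directory todo.cfg will point at
$EDITOR todo.cfg                     # set the todo directory, e.g. export TODO_DIR="$HOME/.todo"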


The wiki suggests a few other tips, such as adding a Bash alias to save on typing. Adding alias t='todo.sh -d /home/yourusername/todo.cfg' to your .bashrc lets you access todo.sh just by typing t. The -d switch is recommended because todo.sh supports multi-user use in a system-wide installation, and each user needs a separate config file.


Speaking of usage, to add a task to your to-do list, type t add "Write the Great American Novel" — optionally including any +Projects or @contexts as needed. The only rule is that you must place quotes around your task text. To see your list of outstanding tasks, type t list or t ls. Todo.sh will write a numbered list of your tasks (along with a summary) to the screen. The number denotes the order the tasks were added. For example, you might see

01 Write the Great American Novel
02 Quit job
03 Move to @TropicalIsland
--
TODO: 3 tasks in /home/nate/.todo/todo.txt

If you add a project, context, or priority as an optional filter to the end of a list command, you can see just those tasks that match. So t ls @home shows you your "home" context, t ls +ServerUpgrade your "ServerUpgrade" project, and so on. But you can also pass any chunk of arbitrary text to a todo.sh ls command, and the script will return those to-do items that match it, so t ls cupcakes is perfectly valid, too.


You assign priority flags to tasks with the pri command, followed by the task number (as ls lists it) and the letter of the priority you want to attach (A through Z), such as t pri 16 A. Subsequently, when you do an ls listing, tasks that have priorities assigned will be color-coded and placed at the top of the list, sorted in order. The depri command removes the priority flag from an item.


When you complete a task, mark it with do followed by the task number. In other words, t do 8. Task 8 will be flagged with an "x " in the todo.txt file, and no longer listed for you when you type an ls command.


On top of the basics described here, the todo.txt community has written several add-ons that supply additional functionality, such as grouping ls output by project, assigning "threshold dates" in the future (so that items do not appear in the list until they are slated to start), and pretty PostScript output for printing. These add-ons are scripts that live in their own directory, which is specified in todo.cfg.


If you never leave home, a GTD-capable task manager that you can use entirely from the terminal is reason enough to look at todo.txt. In fact, because you can use todo.sh over SSH, maybe you never do need to leave home. For that matter, todo.sh runs perfectly well on Maemo handhelds (if you remember to install Bash) and should work on MeeGo and Android devices, too. But for the sake of argument, let's pretend there are times when the command-line orientation of todo.sh isn't what you want. There are several alternatives to keep your tasks in sync while you're on the move.


The first is Todo.txt Touch, a full-featured Android client written by the todo.sh maintainers. You can download it from Github, or look for it in the Android Market (although it is not free in the market, the code is exactly the same as the freely-downloadable version). It sports on-screen buttons for the basic functions (adding, filtering, and flagging tasks, assigning priorities, contexts, and projects, etc.) and uses Android's built-in text-input mechanisms to enter task details. There is a simple one-click interface to access your list of projects and contexts, which really is nice (even if it is flashier than the shell script's text output....).


More importantly, Todo.txt Touch is designed to sync with a remote storage location. As of right now, the only supported service is Dropbox, but the team indicates that more "cloud" storage services are to come. If you want to keep your Todo.txt Touch and your CLI task lists synchronized, all you need to do is open up your todo.cfg file and point the todo.txt directory variable to the proper location in your Dropbox folder.
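Assuming the stock configuration variable is TODO_DIR (check your own todo.cfg to be sure), that change is a one-line edit; the Dropbox path below is only an example:

export TODO_DIR="/home/yourusername/Dropbox/todo"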


But wait, there is more. Those users with iPhones will appreciate todo.txt-web, which is a PHP-and-jQuery based front-end to todo.sh that you can use as a web app. This requires making the todo.txt files writable by your web server, but is otherwise based on the same code. The default CSS styling of the front-end is designed to look like an iPhone web-app, but the code should render equally well in any modern mobile browser.


Finally, for those users who use the Remember The Milk web service, there is a separate Bash script called getmilk.sh that fetches a remote list of RTM tasks and re-formats them into todo.txt-compatible syntax.


Any way you slice it, Todo.txt's power comes directly from the simplicity of its file syntax. You can use the shell script, the Android app, a text editor, or any other tools you devise to edit the file (there is even one written in .Net for Windows users). As long as the applications retain the correct syntax, any of the other front-ends can make use of the result, so you can Get Things Done whether you're in front of the keyboard, on a remote machine, or in the middle of nowhere with only your phone by your side.


View the original article here

Rabu, 29 Juni 2011

Technical Preview of Mageia ARM Port

The Mageia project has published a first preview of a Mageia port for ARM processors. The port, code-named "arm eabi", includes several development tools, basic network services and a full GNOME desktop environment - a minimal version of KDE is also included.


View the original article here

The 2011 Linux Distro Scorecard

Picking a Linux distribution isn't always easy. Deciding which distribution is going to be right for them is one of the most common hurdles for new and aspiring Linux users. With so many to choose from, how do you pick the right one? Let's start with an overview of the major Linux distros, and you'll be ready to jump in right away.

You can find hundreds of Linux distributions, depending on what your needs are. For this scorecard, we're focusing on desktop distributions that are fairly popular and well-supported, with a reliable release history and a strong community. In last year's scorecard, we started with seven distros — this year, we've narrowed the field to six distributions:

Debian
Fedora
Linux Mint
openSUSE
Slackware
Ubuntu

This isn't to say that a distribution isn't the bee's knees if it's not on the list — but we want to start with a manageable selection for new users. If you want to start at the easy end of the spectrum, we've got good choices for you — and if you want to get your hands dirty and learn all about Linux, we've got a few distros that meet those needs as well.

Which distribution is the best? None of them, or all of them. It's really about what meets your needs. Some people want a distribution that's really easy to use, and don't care much about licensing. Some people choose a distribution because of the licensing, and ease of use isn't really that important. You might only want to look at distributions that have KDE or GNOME as a desktop. It's sort of like picking a restaurant: what makes one person happy is going to be a really bad experience for another person. I like spicy food; other folks can't handle it or just don't like it. What we're doing here is letting you have a peek at what's on the menu so you can decide where you'd like to start.

As with last year's scorecard, the criteria for choosing distributions were the major Linux desktop distributions that have demonstrated longevity, a strong community, and stability. Naturally, that means the majority of Linux distributions aren't listed here, so if your favorite didn't make the cut — don't take it personally. Do feel free to talk about your favorites in the comments, and offer other helpful suggestions for new Linux users.

To start with, let's look at Debian. Debian is an entirely community developed Linux distribution with no single commercial backer. Many companies contribute to Debian in one way or another, but it's a purely independent project. Debian has a large developer community, and is used as the base for Ubuntu, Linux Mint, and a number of other distributions. The distribution started in 1993, founded by Ian Murdock — but out of humble beginnings, it's grown enormously.

Debian has a very developer-centric community, though the project has recently welcomed non-packaging contributors to explicitly acknowledge contributors who write documentation, create artwork, perform translations, and so on. Debian has a Social Contract that requires the project remain free, give back to the larger community, be open with problems, and to be guided by the needs of its users and the free software community.

Debian has an intense focus on technical excellence and shipping free software. With the most recent release, the Debian project rid its Linux kernel of all non-free firmware ("binary blobs"), though the project does continue to offer kernels with the firmware in the non-free repos. Debian does allow some non-free repositories, but they're not "officially" part of Debian.

Debian's release schedule is "when it's ready," and not before. The distribution ships at irregular intervals, though users don't have to wait for stable releases to use the latest and greatest. Many Debian users tend to run the testing or unstable branches. Testing (which will be the next stable release) and unstable have more current, sometimes bleeding-edge software — but are also for more experienced and adventurous users. You probably shouldn't run Sid (unstable) unless you have a thirst for adventure and want to get some experience troubleshooting. This isn't to say it breaks a lot, but when it does, it could be spectacular.
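If you do want to try testing, the usual route is to point APT at the testing branch in /etc/apt/sources.list and then upgrade. This is only a sketch, assuming a default mirror; read up on the process before running it on a machine you care about:

deb http://ftp.debian.org/debian testing main

# then, as root:
apt-get update && apt-get dist-upgrade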

Debian stays close to upstream with its packages, and offers a minimum of customization and polishing. Compared to distributions like Ubuntu or openSUSE, Debian makes very few changes. This isn't to say no changes, but there's minimal rebranding and such for Debian compared to some of the major distros that are trying to appeal to a less experienced audience.

You'll get very little hand-holding with Debian. The installer is fairly complex when compared to other distros, and you will have to do much more configuration manually. Users need to make more decisions about which packages to install initially, and will find fewer management tools.

Debian is a distro of choice for users who want to install Linux on an older non-Intel machine. You can run the most recent stable release on x86, AMD64, ARM, PowerPC, Itanium, MIPS, SPARC, and IBM's S/390. Note that Debian dropped support for PA-RISC and Alpha chips with the Debian 6.0 ("squeeze") release.

Debian is also unique in our list because the project now offers a FreeBSD-based release as well, so if you want the Debian userland software with a BSD kernel, you can give it a shot. Not only does Debian support an enormous range of hardware, it also has an extremely large package selection. The packages in the stable release are likely to be a bit behind the upstream's most recent release, but if you want to track new software you might want to run Debian testing or unstable.

Debian is an open project, but it doesn't have as many resources to induct new contributors as Fedora, openSUSE, or Ubuntu. Overall, Debian is best-suited for more experienced Linux users or those who want to learn more about their systems. It's also an ideal distribution for those who are dedicated to the ideals of free software. If you want a distribution that "just works," you probably won't enjoy Debian as much. But without Debian, many of the "just works" distros would not exist.

The Fedora Project is sponsored by Red Hat, but has a fairly diverse set of contributors outside the company as well. The project has its focus on innovation, freedom, and community contributions.

Fedora has a six-month release cycle, but releases often slip if they're not up to quality standards — almost every release cycle has a few slips. But the release dates tend to be close enough to the schedule that users have a fairly good idea when the next release is going to be out. Tracking release dates can be important — the releases are only supported for about 13 months. Users who don't want to upgrade frequently should choose another distribution. But if you want to ride the "cutting edge," of software, Fedora is going to be an excellent choice. Fedora ships the latest software that's stable, or (in some cases) almost stable. New technologies often debut in Fedora.

Fedora is fairly user friendly, but can have a few rough edges. It's not always as polished, and sometimes Fedora ships software that's brand-new — as with the Fedora 15 release, which ships GNOME 3.0, a new init system, and more. The next release is expected to default to the Btrfs filesystem, another technology that's not been widely deployed. Part of shipping "cutting edge" software means that you may encounter some packages that are less than 100% stable, or may not be feature complete. It is worth noting that the quality of the distribution has improved greatly since the early days of Fedora. If you're comfortable with computers and not afraid of the command line, Fedora is a good distro to consider.

If software licensing is important to you, Fedora is one of the top distributions to look at. The project only ships free software, and won't ship media codecs or much else that's not open source or might be legally encumbered. You may have to do some extra work to get MP3 or DVD support, but that's part of the price of freedom.

Fedora takes software freedom very seriously, and makes its tools and infrastructure free as well. If you want to set up a Fedora derivative, it's not hard to do. The project supports a number of spins (Fedora-based distros that differ from the default set of software), and has the tools for users to create their own. Whether you like GNOME, KDE, Xfce, LXDE, or another desktop, you're good to go.

The management tools and installer are fairly good, though they assume some understanding of Linux. The management tools aren't quite as comprehensive as openSUSE's YaST, but you'll be able to do most system administration using GUI tools if you choose to. You'll also find a fair assortment of third-party packages and support for Fedora, including hosting providers that offer Fedora as an option if you want to extend your Fedora use to a hosted server.

The hardware support is more limited than Debian, though — so no Itanium or MIPS for you. If you have x86 or AMD64 based systems, though, you're good.

Generally, Fedora is OK for new users, but might not be the best introductory Linux distribution. It's great for experimenting with new technologies, and to see what's coming in the future for Red Hat Enterprise Linux (RHEL). If you're a developer, Fedora is also a great choice. You can get involved in the Fedora community very easily, no matter what your skill set. The community is friendly and works hard on recruitment for new contributors.

Linux Mint has undergone a lot of change in the last year. Historically, Mint has been based on Ubuntu (which is in turn based on Debian, of course). In September, the Mint folks introduced a Debian-based release in addition to the Ubuntu-based main release. What does that mean? Users who want to get the most polished and stable release should choose the main Mint release, which is based on Ubuntu. Users who want to use a "rolling release" distribution should look at the Debian version.

Whether you go with LMDE (the Debian version) or the usual Mint release, you'll get an easy to use installer, slick package management tools, and out of the box support for MP3s, Flash, DVDs, etc.

Many of Mint's packages come from Ubuntu, but the project does customize or provide its own packages for some software. You'll also note that the latest Mint release (Mint 11) does not share Ubuntu's default desktop — instead, Mint 11 sticks with GNOME 2.32 and is taking a more conservative approach to its desktop. Because of its Ubuntu heritage, Mint has decent third-party support. You'll be able to install packages for Ubuntu on Mint most of the time with no problem.

The Ubuntu-based release also follows Ubuntu's development cycle, but trails by a few months. So when Ubuntu 11.10 is released in October, for instance, you'll see a final release of Mint 12 a few weeks afterwards. Support, likewise, follows Ubuntu's schedule. You get 18 months of support for regular releases, and three years on the desktop for Long Term Support (LTS) releases. The LTS schedule is determined by Ubuntu, of course — but there tends to be an LTS release about every two years. These are strongly recommended for folks who want to install Linux for friends and want to have a hassle-free support scenario.

The Debian-based release is a rolling release, which means that there are fewer releases but you should be able to track LMDE by installing just once. If you're unsure which release to choose, go with the standard release. Note that you can also grab a release that doesn't include the multimedia support, if you're worried about running afoul of the law with patent-encumbered codecs and such.

Mint has a friendly community, though contributing to Mint is not as easy as other distributions. The core team is small, and there's not a major focus on contributing. However, the Mint folks say they're willing to take contributions and dedicated contributors have launched Mint flavors based on KDE and LXDE.

Mint is x86 and AMD64 only — no support is forthcoming for PowerPC, SPARC, etc.

The bottom line on Mint? It's a great distro to start with if you want a replacement for Windows and want a distribution that "just works" right after the install.




View the original article here

The Linux 3.0 Kernel With EXT4 & Btrfs

With the Linux 3.0 kernel carrying CleanCache support along with various improvements to the EXT4 and Btrfs file-system modules, it is time for another Phoronix file-system comparison. This time around the EXT4 vs. Btrfs performance is particularly important with Fedora 16 possibly switching to Btrfs by default. Due to this level of interest, for our Linux 3.0 kernel benchmarks of the EXT4 and Btrfs file-systems, an Intel SSD was tested as well as an old 5400RPM IDE notebook hard drive to represent two ends of the spectrum.


View the original article here

The Linux Kernel Power Problems On Older Desktop Hardware

As mentioned last week, a plethora of Linux power tests are on the way now that we have found an AC power meter with a USB interface that works under Linux and that we've been able to integrate nicely into the Phoronix Test Suite and its sensor monitoring framework. This article presents one of the first tests completed using this power-measuring device: we monitored the Linux kernel's power consumption on an old Intel Pentium 4 and ATI Radeon 9200 system across the past several kernel releases. Even this very old desktop system looks to be affected by the kernel power problems.


View the original article here

The Power of Open from Creative Commons

The Creative Commons project has announced the release of a 47-page book highlighting the stories of a number of people using CC licenses for their work. "The Power of Open collects the stories of those creators. Some are like ProPublica, a Pulitzer Prize-winning investigative news organization that uses CC while partnering with the world's largest media companies. Others like nomadic filmmaker Vincent Moon use CC licensing as an essential element of a lifestyle of openness in pursuit of creativity. The breadth of uses is as great as the creativity of the individuals and organizations choosing to open their content, art and ideas to the rest of the world." Unsurprisingly, it's downloadable under a CC license.


View the original article here

Tuxera Claims NTFS Is The Fastest File-System For Linux

Coincidentally, there's some more file-system news just after writing about the EXT4 and Btrfs file-systems with the Linux 3.0 kernel. A Phoronix reader has pointed out that a developer at Tuxera is claiming their proprietary NTFS Linux kernel driver makes the Microsoft file-system the fastest choice under Linux. Reportedly, this kernel driver that implements Microsoft NTFS support is about twice as fast as EXT4, the main Linux file-system of choice right now...


View the original article here

Selasa, 28 Juni 2011

VIA OpenChrome KMS Support Is Nearly Done

James Simmons has written a status update to the OpenChrome development list concerning his ongoing work towards enabling kernel mode-setting (KMS) support for VIA hardware with this community-maintained VIA Linux project...


View the original article here

Virtualization Industry Shakeup

Simon Crosby, previously the CTO of Citrix Systems, announced today that he, Ian Pratt, previously VP of Engineering at Citrix Systems and chairman of Xen.org, and Gaurav Banga, creator of Phoenix Hyperspace, are launching a new startup, Bromium, Inc.


View the original article here

Weekend Project: Setting up DNS Service Discovery

DNS Service Discovery (DNS-SD) is a component of Zeroconf networking, which allows servers and clients on an IP network to exchange their location and access details around the LAN without requiring any central configuration. Most Linux distributions supply the Avahi library for Zeroconf support, but not nearly as many users take advantage of it. Let's look at an easy-to-set-up use for DNS-SD: providing automatic bookmarks to services. All it takes is an Apache module and a Firefox extension.


The essence of DNS-SD is that Zeroconf-supporting applications or hardware devices broadcast a DNS SRV record (of the kind typically used in static DNS to point to a host and port number combo) advertising themselves, and everyone else on the network hears it and takes note. They make the broadcast over multicast-DNS (mDNS), which is a protocol derived from normal DNS, but using special, local-only "multicast" addresses and the reserved .local pseudo-domain.
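For reference, here is a hedged example of what such a record carries, written in ordinary zone-file notation; the host name, TTL, and port are placeholders, and the mDNS broadcast conveys the same priority, weight, port, and target fields:

_http._tcp.local. 120 IN SRV 0 0 80 myserver.local.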


The system is akin to Universal Plug-and-Play (UPnP), except that it handles more types of services, and builds more directly on top of DNS. The mDNS/DNS-SD pair's major backer is Apple, and UPnP's is Microsoft, so as you might guess, neither is likely to give up and start supporting the other. There is hope for a unifying IETF protocol in the future, but at the moment mDNS/DNS-SD is well-supported enough by the open source Avahi library that Linux users can start working with it today.


In the Apple world, printers and chat clients commonly use mDNS to advertise their availability. But there is a long list of application types that can advertise over the system, including VoIP clients and servers (such as Asterisk), closed-circuit video devices, even collaborative editors (such as Gobby). Essentially, any service that can be described in a SRV record can be advertised; it just needs to provide a service name, a transport protocol (TCP or UDP), and the port and hostname of the server where it can be reached. The mDNS .local domain allows participating devices to assign themselves reachable hostnames.


With the server properly configured, the DNS-SD stack on any client machines will catch and catalog the local services automatically, for use by applications on the system. On a Linux box, Avahi hears and notes the mDNS messages, and an interested client (say, a chat app) asks Avahi if there are any XMPP servers nearby to talk to. The connection is made, and voilà, you start chatting.


It's easy to imagine how DNS-SD could take the pain out of some typically hard-to-configure applications like VoIP, but if you are new to DNS-SD there are simpler places to start, such as with good old-fashioned HTTP web servers. If you're like me, your main Linux box is running a variety of web interfaces for local services: phpMyAdmin, CUPS administration, Webmin or another config tool (in my case, I also have an X10 home automation front-end and the MythWeb MythTV interface running). You may also have work-related services running, such as a Bugzilla instance or network administration workspace, or even a straightforward Intranet site.


The unifying principle is that these are all web services you might like to access from more than one machine on the LAN. You can manually enter the bookmarks on every machine, or use a synchronization tool like Firefox Sync or XMarks, but these strategies make you choose between repetitive work and potential security risks -- not to mention they require updating all of the client machines whenever there is a change. That is precisely the problem Zeroconf networking was designed to solve.


Developer Andrew Tunnell-Jones has written a small but highly useful extension that adds DNS-SD support to Firefox. The code is hosted at Github, but you can install the extension itself, "DNSSD for Firefox," through the addons.mozilla.org site. It requires Firefox 4.0 or later (no word yet on the just-released Firefox 5; it doesn't appear that anything relevant has changed in Firefox itself, but the add-ons system is notoriously pedantic about version numbers), and a working Zeroconf implementation. For Linux, Avahi works just fine, and Mac OS X users will already have Apple's Bonjour installed. Windows users can install the Apple-provided Bonjour-for-Windows package, which Tunnell-Jones links to from the extension page.


After you restart Firefox, the extension adds a menu labeled DNSSD to the Navigation toolbar (between the forward/back buttons and the URL bar) and to the Bookmarks menu. Click on it, and you will see a list of all of the local HTTP servers detected by your Avahi or Bonjour service: no configuration necessary. If you want to double-check the extension's list, you can run avahi-browse --all from the command line.
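avahi-browse can also be narrowed to a single service type, which is handy for checking exactly what the extension should be seeing; this sketch assumes the avahi-utils command-line tools are installed:

avahi-browse --all           # everything advertised on the LAN
avahi-browse -rt _http._tcp  # resolve just the HTTP servers, then exit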


If you are running GNOME, you should also see a desktop notification pop up whenever the extension notices a new service (although for most services, this will just be at start-up time). That option is configurable in the preferences, which you can get to through the Add-ons Manager.


It works, and it is automatic, but there are a few quirks to be aware of. First, you don't (yet) have the option to choose where the DNSSD menu is displayed. Placing it in the navigation toolbar makes sense because that is the one toolbar almost guaranteed to be present, but putting it in the Bookmarks toolbar would seem to make more sense to me -- it seems like a natural complement to Firefox's automatic "Most Visited" and "Recently Added" bookmark folders. Second, although you can access the DNSSD menu through the Bookmarks menu, you cannot move it around in your bookmarks to a more convenient location. I asked Tunnell-Jones about both of these options, however, and it sounds like they are possibilities for future releases.


You can probably think of a handful of local web services you would like to automatically advertise around your office or home network, but the odds are that most of them do not advertise over mDNS out-of-the-box. In my case, the only running server that did provide a web interface over DNS-SD was the MT-DAAP audio server. To get your other services to announce themselves, you'll need mod_dnssd.


Mod_dnssd is an Apache module that adds simple mDNS/DNS-SD support to your Apache-hosted sites, with a minimum of configuration fuss. The latest release is 0.6, which supports Apache 2.2, although there are older releases for those still running Apache 2.0 for some reason.


The author, Lennart Poettering, is best known as the maintainer of PulseAudio (which, yes, uses mDNS/DNS-SD to locate other networked PulseAudio sources on the LAN). The docs on the site are a nice introduction, but Poettering has written a more extensive how-to on his blog. To get it working, you'll need to install the module (packages are available on the site, but most distributions offer it as well), and make sure that Apache loads it at startup (check your distro's documentation for details, or edit your /etc/apache2/apache2.conf if installing from source).


To use the module, you must first activate it by placing the DNSSDEnable on directive in the Global Environment section of apache2.conf. With that configuration alone, Apache will advertise all of the VirtualHosts over mDNS/DNS-SD -- however, clients will have trouble connecting to them if you do not label your VirtualHosts with fully-qualified domain names.


For a little more fine-grained control, you can add a DNSSDServiceName "Whatever You Want To Advertise It As" directive to each VirtualHost or Location block. The ServiceName you assign will be the user-visible label seen in the DNSSD menu offered by the Firefox extension, so you can give easy-to-remember, LAN-wide labels to your bug tracker, Apt-CacherNG control panel, or any other site. But remember to include the server's name if you are running multiple web servers on the LAN, lest your users get confused.


By default, mod_dnssd advertises Apache resources as HTTP services (i.e., using the _http._tcp SRV record). That makes sense for most web services, but you can also alter it to properly advertise other applications, such as WebDAV or RSS feeds. Simply add the DNSSDServiceTypes directive to your Apache configuration, followed by a space-separated list of the service types you wish to advertise -- either for the server as a whole, the VirtualHost, or the Location, depending on where you put the directive.
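Putting those directives together, a configuration sketch might look like the following; the host name, path, and labels are placeholders, not values from the article:

# in the global section of apache2.conf
DNSSDEnable on

<VirtualHost *:80>
    ServerName wiki.lan.local
    DNSSDServiceName "Team Wiki (on athena)"
</VirtualHost>

<Location /dav>
    DNSSDServiceName "Shared Files (WebDAV)"
    DNSSDServiceTypes _webdav._tcp
</Location>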


Obviously, the Apache plus Firefox combination only scratches the surface of what DNS-SD as a whole is capable of, but as an increasing number of services use HTTP, it is at least a useful place to start. You can save yourself some trouble by DNS-SD-enabling your Intranet sites and letting your users find them automatically. Of course, you do still need to take precautions to protect your services. The DNS-SD services will only be visible to LAN clients, but if you do not want that to include WiFi visitors, you should partition them off into a different subnet altogether -- and it goes without saying that your admin panels ought to be password-protected.


But there is another subtle condition imposed by this scheme: it requires changing Firefox, the client application. So although it's easy to imagine DNS-SD-advertised bookmarks being useful in a public environment (say, an Internet cafe), you still cannot expect visitors wandering in off the street to have the right extension installed. In my mind, DNS-SD, like Microformats, is a technology that Firefox really ought to support off-the-shelf. There is no reason not to, unless you make the security argument -- but honestly, a service at risk is not any better-protected just because it is un-advertised. Maybe Tunnell-Jones's extension will have a hand in raising awareness of the convenience offered by DNS-SD. At the very least, you can leverage that convenience yourself, and that's a pretty good start.


View the original article here

Weekend Project: Use Rapid Photo Downloader for Photo Management

Often the most impressive thing about the open source community is getting to watch someone step up to work on a task that's far from fun or glamorous in the traditional sense. A good example of that phenomenon is Damon Lynch's Rapid Photo Downloader (RPD), a utility that takes the pain out of what many projects overlook: getting your newly shot material off of the camera's memory card and into your computer. Despite the name, RPD handles both photos and video, and although its speed is impressive, how it really makes your life easier is by keeping you and your storage organized.


The root "problem" that RPD sets out to solve is what happens when you plug your digital camera or camera memory card into your Linux computer. The default behavior is for a desktop tool to pop up, usually a GNOME or KDE front-end to gPhoto2, the standard image-transfer library. The tool knows where in the flash memory hierarchy to find the actual image files (including the separate storage schemes for raw and JPEG images), and it copies them to the hard disk and optionally erases them from the card.


That sounds simple enough, but more often than not, Linux distributions don't use a file-manager tool to handle the task. Instead they hand it off to a full-featured photo management app, which takes considerably more time to step through the process, often filing images in its own internal database, making thumbnails, and other bookkeeping jobs. While that might be a tolerable overhead for a 1 or 2GB card, when you hit 16 or 32GB, it becomes agonizingly slow. If you're on notebook battery, it is even worse.


Compounding the problem, no two photo-management apps can agree on where to store things: ~/Photos, ~/Pictures, ~/MyPhotos, etc. Even if you're a good Linux citizen, every few releases the distros change their default app set, and you eventually find your photos scattered around a half-dozen locations. That makes images hard to find, easy to lose, and easy to forget when you're migrating computers or making backups.


You can download RPD from the project's Web site. The team provides detailed instructions for Ubuntu, Fedora, Debian, Mandriva, and several other distributions. In most cases, there is either a prepackaged version of RPD available from the distro, or a simple-to-add repository. The latest release is version 0.4.1, a minor update from May of this year.


After you've installed RPD, you can launch it from the Applications menu or from a terminal with rapid-photo-downloader &. RPD is built on GTK+ and some GNOME underpinnings, but it runs just as well in KDE and other environments, since the only key dependency is Python. How you get RPD to actually launch automatically when you attach a camera or pop in a memory card varies, though. In GNOME, go to the "Removable Drives and Media" option in Preferences, and enter rapid-photo-downloader as the command to import photos when a device is connected. In KDE, the setting is found in "System Settings" -> "Device Options."


With RPD, you are in control of all of the options. The preferences dialog lets you choose any folders on your system as target directories (including separate settings for photos and videos), and optionally allows you to auto-create sub-folders to further structure your archives. RPD can offload media from multiple devices at once, so you can get the maximum performance out of your system by, for example, connecting your camera via USB cable while popping another memory card into the card reader slot.


In its simplest form, the sub-folder feature lets you split up your collection in sub-folders by year and month, which makes backing up to non-volatile media considerably simpler. But you can also configure sub-folders to split up images by their filetype (e.g., keeping raw and JPEGs separate), by automatically-gathered metadata (such as camera model), or by "job codes" that you define yourself ("work" and "personal" come immediately to mind).


A related feature is RPD's auto-file-renamer. If you have a naming scheme in mind, RPD lets you set up a renaming formula in "set it and forget it" mode, using the same options available as sub-folder choices. A big benefit to this technique is keeping more than one image source straight. Not every camera allows you to control the file name and file-numbering settings it applies to new material, and these days almost everyone has one "real" camera plus a cameraphone. RPD can massage the less-flexible filenames into the same format as the rest.


The RPD project makes a big deal out of its speed. On top of the name itself, the home page boasts of a 2.5x speedup over F-Spot and a 12.5x speedup over Shotwell. I don't think those numbers are scientific, but RPD is certainly faster. Even more so when you factor in offloading from two cards at once and the instantaneous file-rename feature, which saves an entire step.


Speaking of saving steps, in my mind one of RPD's signature features is the ability to automatically save backup copies of your media. As a member of the non-elite club of those who have had a laptop drive fail while on the road, I always carry a 2.5" external drive with me to save a second copy of any files I shoot (the first being on my laptop). RPD will write backup copies of each file to a directory you specify, and can even do so at the same time it saves the main copy. This assumes your bus bandwidth can keep up; if you are also doing multi-device offloading, you would probably be better off serializing some of your tasks...


As with the main file operations, you can specify separate settings for photos and videos, and RPD will automatically detect all removable storage media, so you can save backups in more than one place if you desire. RPD also allows you to configure some automated behavior, such as starting the offload (and backup) process as soon as the program launches, unmounting removable devices when offloads are completed, and so forth. A handy feature you are likely to only appreciate on rare occasions is how RPD handles name collisions — such as when two auto-renamed files somehow would end up with the same filename. Finally, RPD can automatically sanitize filenames and folder names to ensure compatibility with lesser, non-free operating systems.


For 90% of RPD's users, the basic offload and backup features are more than enough — well, assuming that they use backups, but then again, that is part of the point. RPD builds that easy-to-procrastinate task into the plug-and-play, normal operation. High-end users are more likely to appreciate the flexibility of subtle things like automatic file renaming and directory creation. After all, if you only take a few pictures or videos a year, opening up your Photos folder in Nautilus or Dolphin and browsing by thumbnails might be all the asset management you need. It's when you have hundreds that that method breaks down.


I'm a big proponent of the "let each tool focus on its own thing" approach. As such, I don't like it when a photo editor decides it knows better than I do where I want my images stored or how I want them organized. Ubuntu frequently changes which iPhoto clone is the preferred photo-management app, and honestly I don't find any of them to be particularly good, or particularly good at guessing what I want. It works far better to have a dedicated image offloader like RPD launch when I plug in a memory card. Thus, regardless of whether I choose to fire up Digikam, GIMP, Rawstudio, or a Flickr exporter, I know that my collection is in the same place. Oh, and I know that there's a backup copy on a removable drive, too — even if I forgot about it when I sat down at the keyboard.


View the original article here

Will Linus Like Your Video?

The Linux Foundation Video Contest this year is different. For the first time, we're running it mid-year and more importantly, it is one of the ways we're celebrating the 20th Anniversary of Linux. Here are five more reasons to submit a video this year! Deadline is July 2, 2011.


1) Linus. Linux creator Linus Torvalds is choosing the ultimate winner this year. No one has a better sense of humor, or competitive streak, than Linus. Winning this year's contest will get you some real community cred. Don't be intimidated. No flames.


2) Planes, trains and automobiles. You get a free trip, people. And, you get to choose from four different destinations depending on which event you choose to attend as the winner: LA Film Festival (Los Angeles), SXSW (Austin, Texas), LinuxCon North America (Vancouver, B.C.) or LinuxCon Europe (Prague).


3) You'll go down in history. This year's winning video will become *the* 20th Anniversary of Linux video. It will be hosted here on the widely visited Linux.com website and The Linux Foundation's YouTube Channel, as well as promoted to hundreds of thousands of people on The Linux Foundation's social channels.


4) Get skillz. If the only thing you've ever shot is your kid eating applesauce but you had a great time doing it...winning this contest could open up a new outlet for your creativity or even get you noticed and open up a new career path.


5) Make a difference, inspire. Linux invokes passionate emotion from the people who use it and who are involved in its development. Linux is the largest collaborative development project in the history of computing. It set the expectation that software, hardware and much more could be built collectively. Your video could plant the seed for the next inspiration to come from Linux.


View the original article here

Senin, 27 Juni 2011

WordPress 3.2 Approaches with RC2 Release

The second and final RC for version 3.2 of the open source blogging and publishing platform, which addresses several bugs found in the previous release and "tweaks" the new default theme, has been made available for testing.


View the original article here

XFS Is Becoming Leaner While Btrfs & EXT4 Gain Weight

Red Hat's Eric Sandeen has written an interesting blog post concerning the size of popular Linux file-systems and their kernel modules. It turns out that the XFS file-system is losing lines of code, while maintaining the same feature-set and robustness, but the EXT4 and Btrfs file-systems continue to have a net increase in lines of code...


View the original article here

Sabtu, 25 Juni 2011

Any Linux News From The E3 2011 Gaming Expo?

E3, the Electronic Entertainment Expo, is officially kicking off today in Los Angeles and will be running through Thursday. This, along with the Game Developers Conference, is one of the key times of the year for the electronic gaming industry. A number of game studios will be announcing new titles and making other big announcements, but will there be anything Linux-related?...


View the original article here

Apache Traffic Server 3.0.0 Goes 64 bit

New release sees a major boost in handling small objects at over 200,000 requests per second in benchmarks. Traffic Server also builds on Mac OS X, Solaris and FreeBSD now.


View the original article here

Best Practices for Making Source Code Available Under the GPL

When you release code under the GNU General Public License (GPL), you undertake a specific set of obligations. Many of these obligations, such as providing a copyright notice and a copy of the GPL version you are using, are relatively simple. However, the obligation to provide source code with the object code is more complex, because you have several choices about how to fulfill it – and the choice you make can cause ongoing problems, especially if you are not set up to administer it.


View the original article here

Calxeda Announces ARM Server Alliance

Officials with Calxeda, the startup that's building ARM-based chips for low-power data center servers, announced a "Trailblazer" program designed to create an ecosystem around its technology. But, while Calxeda touted support from Ubuntu Linux sponsor Canonical, among other companies, there's been no hint from Microsoft that it will create a server edition of its ARM-based "Windows 8"...


View the original article here

Can Project Harmony Streamline Rules for Open Source Contributions?

OStatic's open source theme of the day is whether open source contributions are tracking with increases in open source usage, especially by businesses and organizations. In this post, we discussed how many organizations that now use open source aren't giving back at all. On this topic, one of the more interesting projects currently running isn't an open source software development project, but rather a coordinated effort to establish rules and guidelines for making contributions to open source. It's called Project Harmony, it is heavily backed by Canonical, and on June 23 it marks its first year of notable effort to establish rules for open source contributions.


According to the Project Harmony page:



"Project Harmony is a community-centered group focused on contributor agreements for free and open source software (FOSS). As a group, we represent a diverse collection of perspectives, experiences, communities, projects, non-profit and for-profit entities. In that diversity, we share a common belief in the future of FOSS, and a common interest in using our skills (whether they're legal, organizational, editorial, technical, or otherwise) to the benefit of collaborative FOSS communities."


View the original article here

Jumat, 24 Juni 2011

Chrome May Become Ubuntu's Browser

Canonical founder Mark Shuttleworth says there is "a real possibility" that Chrome will replace Firefox as the bundled browser in future distributions of the Linux operating system.


View the original article here

Cloud Adoption Survey Says Linux is OS of Choice

Cloud.com, BitRock, and Zenoss have surveyed more than 500 members of the open source and systems management community about trends in cloud computing and users' preferences and plans. The result? There's a strong correlation between open source and cloud usage — and the survey found that Linux looms large in plans for deployments.


The survey was taken by 521 IT professionals in a broad variety of institutions, with 9% working for public companies, 51% for private / privately-held companies, 11% in educational institutions, 5% in government, and 4% at non-profits. The respondents range from CTOs (11%) and IT managers (18%) to technical support staff (7%) and developers (12%).


Planning for cloud infrastructure varies widely, with only 7% having an "approved cloud computing strategy," and 20% with "no plans to develop" a cloud computing strategy. About 44% of the respondents have at least a partial or fully developed strategy for cloud computing and — good news for the marketing folks — 32% are still gathering input for their 2011 cloud computing strategy. (Though the survey was run earlier this year, so it may well be that the ones gathering input earlier in the year are now finished.)


Now that we have a profile of the people responding, let's take a look at the results. One of the most interesting, here at Linux.com at least, is the OS that respondents plan to run. Overwhelmingly, Linux was on the shopping list for 83% of the respondents — compared to 66% for Windows, 8% looking to BSD, only 5% for Solaris, and 12% choosing "other." Naturally, many shops are looking at mixed deployments to satisfy needs for applications that run only on Linux or Windows, but it's clear from the survey that Linux is doing quite well.


Not just Linux, of course: open source is doing quite well too. Most organizations (69%) plan to use open source "whenever possible," and only 3% of the organizations are against using open source in the cloud.


What do organizations want to do with all this open cloud Linux-based goodness? Right now there's a fairly even mix of plans to use cloud computing for compute (59%), storage (51%), and Platform as a Service (PaaS) at 47%.


Application choice for the cloud shows strong interest in content management and Web publishing (57%), document management (39%), and network monitoring and management (34%). See Figure 1 for the chart of results.


The majority of organizations want to run cloud computing on their own hardware, with 57% of the respondents wanting to use their own hardware and facilities. Only 18% wanted to use dedicated hardware at a managed service provider, and 23% of the organizations want to use their own hardware at a service provider using a shared infrastructure.


Why are organizations turning to cloud computing? The reasons are varied, and most organizations have a number of reasons for wanting to use cloud computing. The top reason, at 61%, is scalability. Scalability is followed closely by cost savings (54%), and ease of management (53%).


My favorite reason, redundancy, came in fourth with only 49% of respondents. Greater flexibility also came in at 49%, and elasticity was right there with 48%. It's a bit surprising that elasticity isn't higher on the list, given that scalability features so highly. You'd think that the two go hand-in-hand, with a need to meet fluctuating demand. See Figure 2 for the full results.


The organizations also have some notions about what the cloud is good for. Though only 54% listed cost savings as a reason for cloud computing, 68% believe it will save on hardware costs, and 66% believe it will be faster to deploy infrastructure. And 57% say that it will reduce the burden of systems management. Though less than half of the respondents cite elasticity as a reason for choosing the cloud, 51% say that elasticity is a benefit of cloud computing.


It doesn't look like most of the organizations are depending too heavily on cloud computing just yet. Many of the organizations (61%) plan to use the cloud for development and testing. Far behind development comes Software-as-a-Service (SaaS), with 37% of the organizations planning to use the cloud to offer SaaS. Note that this doesn't measure the companies that want to use SaaS that's hosted in the cloud. A third (33%) of the organizations want to use cloud computing to mimic public cloud services behind their firewall, and just 27% want to use cloud computing for High Performance Computing (HPC).


Cloud computing does have some hurdles to overcome. A lot of respondents are worried about the security of the cloud, and inertia (otherwise known as a "conservative IT strategy") is in the way for 30% of organizations. The lead inhibitor, though, is training — 43% of organizations see a lack of cloud training as a problem for deploying cloud computing.


It's also worth noting that regulatory compliance is cited by more than 20% of the organizations. That's worth paying attention to for those companies supplying solutions related to cloud computing. No doubt regulatory compliance features highly on the list of the 9% of public companies that are mulling cloud computing.


Security is also seen as a challenge for management in the cloud, with 36% of users saying that security is a headache, while only 12% said that performance management is a problem. Configuring guest instances is a challenge for only 10% of the users, and provisioning Linux instances came in dead last at 7%.


Finally, a whopping 53% say that their existing systems management tools do not translate well for managing their cloud computing environment. That's something for systems management vendors to pay attention to.


If you're hoping to make use of the survey in your own work, note that the survey results are provided under the Creative Commons Attribution 3.0 Unported (CC BY 3.0) license.


The bottom line? It looks like cloud computing is following a typical adoption pattern. Organizations are finding out what cloud computing is good for, and what it is not. Naturally, Linux features significantly in most organizations' plans for cloud computing, as does open source software.


Does this fit your expectations? Tell us in the comments how your organization is using Linux and cloud computing!


View the original article here

Debian Squeeze, Squid, Kerberos/LDAP Authentication, Active Directory Integration And Cyfin Reporter

This document covers setup of a Squid Proxy which will seamlessly integrate with Active Directory for authentication using Kerberos with LDAP as a backup for users not authenticated via Kerberos. Authorisation is managed by Groups in Active Directory. This is especially useful for Windows 7 clients which no longer support NTLMv2 without changing the local computer policy. It is capable of using white lists and black lists for site access and restrictions.
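

As a taste of what such a setup involves, here is a minimal squid.conf sketch of the Kerberos-first, LDAP-fallback scheme the guide describes; the helper paths, realm, base DN, and the "InternetAccess" group are illustrative assumptions, not the guide's actual values:

# Kerberos (Negotiate) authentication first -- hypothetical service principal
auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 10
# Basic-auth fallback against Active Directory via LDAP for non-Kerberos clients
auth_param basic program /usr/lib/squid/squid_ldap_auth -b "dc=example,dc=com" -f "sAMAccountName=%s" -h dc1.example.com
auth_param basic realm Internet Proxy
# Authorisation by Active Directory group membership
external_acl_type ad_group %LOGIN /usr/lib/squid/squid_ldap_group -b "dc=example,dc=com" -f "(&(sAMAccountName=%v)(memberOf=cn=%a,cn=Users,dc=example,dc=com))" -h dc1.example.com
acl InternetUsers external ad_group InternetAccess
http_access allow InternetUsers
http_access deny all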


View the original article here

Development Release: Scientific Linux 5.6 RC1

Troy Dawson has announced that the first release candidate for Scientific Linux 5.6 is out and ready for testing: "Scientific Linux 5.6 RC 1 is now available. We have pushed out the latest update to Scientific Linux (SL) 5.6. Changed since beta 3: SL 5.6 has a new....


View the original article here

Development Release: Scientific Linux 6.1 Alpha 1

Troy Dawson has announced the availability of the first alpha release of Scientific Linux 6.1, a distribution built from source packages for Red Hat Enterprise Linux 6.1 and enhanced with extra applications useful in academic environments: "The first alpha for Scientific Linux 6.1 has been released. This release....


View the original article here

Distribution Release: Chakra GNU/Linux 2011.04-r1

Phil Miller has announced the release of Chakra GNU/Linux 2011.04-r1, a new respin of the Arch-based desktop distribution: "The Chakra development team is proud to announce the first respin of 'Aida'. Some weeks passed since Chakra 2011.04, we have added lots of package updates, KDE got updated to....


View the original article here

Kamis, 23 Juni 2011

Distribution Release: Toorox 06.2011

Jörn Lindau has announced the release of Toorox 06.2011 "GNOME" edition, a Gentoo-based distribution showcasing the new GNOME 3 desktop: "A new version of Toorox 'GNOME' has been finished. This one contains the GNOME desktop 3.0.2. What's new? The kernel is Linux 2.6.39-gentoo and USB 3.0 support has....


View the original article here

Do More with Tor: Running Bridges and Invisible Services

Last time, we took a look at basic browsing with Tor, the anonymizing Web relay network. At the very end of that article, we touched on how to actively participate in Tor by running your own relay. That's when your local copy of Tor functions as a node in the network, funneling encrypted Tor traffic peer-to-peer to help increase the overall Tor network's bandwidth. But there is even more you can do, such as running invisible services and bridges for those who need even more privacy than vanilla Tor provides out of the box.


As a refresher, all active Tor nodes are called "relays" — they pass packets between other relays. Each connection is encrypted, and no relay knows the starting point or ultimate destination of any of the traffic it relays. That's what makes Tor traffic so difficult to snoop: the route is calculated out-of-band (so to speak), and no one on the network knows it, so no one can steal it.


But the end-user's HTTP (or IM, or IRC, or whatever else) traffic does have to enter the Tor network somewhere. By default, whenever you launch Tor, it requests addresses of some Tor network "on-ramp" relays. Although the topography of the Tor network is constantly changing, and although the connection between the user and the on-ramp is encrypted, these addresses are public information, so adversaries could still watch the user's connection and interfere somehow — even by crude means such as switching off the user's connectivity.


The solution is to have secret, unpublished on-ramp relays. The Tor project calls them bridges, in order to denote the distinction. How many bridges there are is unknown, because there is no list. The most an ISP or attacker can do to block Tor is cut off access to the public relays, but if a user has the address of a Tor bridge, he or she can still connect.


Running a Tor bridge is as simple as running a normal Tor relay. The simplest way is to install the Vidalia GUI client, which allows you to start and stop Tor functionality on demand. The project recommends you use the latest files directly from them, rather than use a distribution's package management system, because security fixes can take too long to pass through distro review. The Linux/Unix download page links to repositories for Debian-based, RPM-based, and Gentoo distributions, as well as the three BSD flavors and source packages.
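

On a Debian-based distribution, for example, wiring in the project's repository looks roughly like this; the suite name is an assumption (match it to your release), and you will also need to import the repository's signing key as described on the download page:

echo "deb http://deb.torproject.org/torproject.org squeeze main" | sudo tee /etc/apt/sources.list.d/tor.list
sudo apt-get update
sudo apt-get install vidalia   # pulls in the tor package as a dependency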


Note that this is not the "Browser Bundle" which is geared towards end-users only. You'll need to install the "vidalia" package, which will pull in the necessary Tor dependencies. Launch Vidalia, then choose the "Setup Relaying" button. Selecting "Relay traffic for the Tor network" configures your node as a standard relay. "Help censored users reach the Tor network" is the bridge option.


There are a few options to consider in the "Basic Settings" tab. Stick with the default relay port (9001) unless you know that your ISP blocks it. Unless you have a compelling reason not to, the project also wants you to provide some sort of contact information — but it is not published. Your IP address and port number are all that Tor users see. By default, you should check "Mirror the Relay Directory," because this is how Tor users establish connections. At the very bottom, you see "Automatically distribute my bridge address." To run a generic bridge, leave this checked. If, however, you are setting up your bridge for the benefit of some particular friend (including yourself), you can leave it unchecked — but you will need to tell the person in question your bridge IP address and port number.
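

If you prefer to edit the configuration by hand, the same choices map onto a few lines in the torrc file that Vidalia manages; a minimal sketch, with the contact address as a placeholder:

ORPort 9001                 # the default relay port; change it if your ISP blocks it
BridgeRelay 1               # run as a bridge rather than a public relay
ContactInfo you@example.com # placeholder; collected by the project, not shown to users
PublishServerDescriptor 0   # only for a private bridge you hand out yourself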


You'll notice that the "Exit Policies" tab is grayed-out when you configure a bridge. When running a normal relay, you can set options here to limit access to particular types of traffic or block specific site requests from exiting the network at your node. But since a bridge is an entry point, those options do not apply.


That's all there is to bridge setup. To use a bridge as your own entry point to the Tor network, visit Vidalia's Network tab. Check the "My ISP blocks connections to the Tor network" option, which will reveal a list box where you can enter individual bridges. If someone you know is running an unpublished bridge, you can enter it directly. Otherwise, you will need to request bridge information from the Tor project.


How that works securely is a bit complicated. You can request a bridge list by visiting a special SSL-encrypted page on the Tor site; my understanding is that the project keeps track of what bridges it sends to what requesting IPs, so that evildoers cannot harvest the entire bridge collection. You can also send an email to the Tor project, and as long as you use one of the few well-known email address domains, it will return a set of bridge IDs. I assume that this information is also tracked; how to allow access to bridges without compromising their security is a hard problem.


But however you get them, simply enter the bridge IP:port information into Vidalia's Network tab, and you can browse and network without getting blocked. All bridge IDs consist of an IP address and port number separated by a colon, optionally followed by a cryptographic fingerprint, although that feature does not seem to be in widespread use.
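

In torrc terms, using a bridge as your entry point takes just a couple of lines; the address, port, and fingerprint below are placeholders:

UseBridges 1
Bridge 203.0.113.7:9001
# or, with the optional fingerprint appended:
# Bridge 203.0.113.7:9001 A1B2C3D4E5F60718293A4B5C6D7E8F9012345678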


Essentially, bridges simply offer an alternate, harder-to-block access method to the Tor network. A more intriguing use of the software is to run an IP-based service that can only be accessed through Tor (as opposed to the Internet at large). You can publish a Web site, run a POP/IMAP/IRC server, or even make an SSH server accessible, all without ever revealing your address to visitors, and even from behind a firewall.


How is that possible? The actual traffic is routed through the Tor network, just like any other Tor data. The tricky part is making the service reachable. Tor does this by maintaining a distributed hash table of services, each of which is identified by a pseudo-random host name in the .onion domain. Whenever a new service launches, it connects to a few Tor relays (like any other relay would), then tells the hash database which relays those are. When a client makes a request to the ABCDEFWXYZ.onion host, the hash database picks one of the relays associated with the service and forwards the request on. The relays involved never know that the packets they are carrying are destined for a particular service, because the data is mixed in with all other Tor-based, encrypted traffic.


There are a few other checks-and-balances involved to protect everyone; if you're interested, the entire protocol is documented on the Tor Web site. There you can also find a link to the Tor hidden service search engine (based on DuckDuckGo), as well as an example Web site run by the project. A key point to remember, however, is that you must be running Tor on the client side to access these services, because they are accessible only within the Tor network.


It is also important to remember that the hidden service should probably only connect to Tor on the server side, too: it can be extremely tricky to try to run a normal Web server setup and a Tor-based .onion site from the same Apache configuration. Worse, someone who finds the hidden content on your existing IP could then prove that you are the host, which defeats the purpose of running a "hidden" service entirely.


Tor recommends you take a look at a lightweight Web server like thttpd. Whichever HTTP server you choose, you should make it accessible only to localhost. Next, in your .torrc configuration file, find the location-hidden services block, and add a pair of lines like

HiddenServiceDir /some/path/to/a/place/where/you/can/keep/files/for.your/hidden_service/
HiddenServicePort 80 127.0.0.1:5222

The HiddenServiceDir directory is merely a location where Tor will dump a text file containing the .onion address for your service. The HiddenServicePort line has three parts: the "fake" port number advertised to visitors (80 here, to serve as a standard Web server), the address to bind to (here, 127.0.0.1, which is localhost), and the local port number (5222). You can also provide this information in Vidalia, in the Setup: Services tab.


Now, when you restart Tor, it will fetch a .onion host name for you, and save a private key file in your HiddenServiceDir directory. This key verifies that you are, in fact, the service listed in the distributed hash database, so that clients can connect with confidence — so don't lose it. That's all there is to it; you can set up as many services as you like, running anything that you care to configure and that can be ferried by Tor.
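

Concretely, that last step might look like the following, with thttpd bound to localhost as suggested above; the document root is an assumption:

thttpd -h 127.0.0.1 -p 5222 -d /var/www/hidden-site
# Tor writes the generated address to a file named "hostname" next to the key:
cat /some/path/to/a/place/where/you/can/keep/files/for.your/hidden_service/hostname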


How you spread the word about your service is another matter — if you post about it on the public Internet, your foes can almost certainly associate you with it. There are in-Tor-only message boards, however, as well as community forums where people often post links to .onion services. Of course, that's assuming you want to publish your content. As with bridges, you may also need to make something available only to specific people, or only for a short amount of time, in which case person-to-person is probably best.


There is definitely a trade-off involved with both of these techniques. You cannot simply run an invisible Tor bridge and expect dissidents to find it and use it — they will have to set up and run Tor. Likewise, you cannot run an anonymous Web server dishing out Truth by the barrel-full to the whole wide world — you can only make it accessible to other people running Tor. Nevertheless, these are both exciting opportunities that without Tor wouldn't exist at all. The initial Tor concept didn't include either — it just goes to show you that a solid technology like Tor has more and better uses than casual Web surfing, as long as users are willing to push the boundaries. Who knows what else can be built on top of Tor?


View the original article here

First Step Towards openSUSE 12.1 with Milestone 1

Milestone 1, the first step towards the upcoming openSUSE 12.1 release, is now available. It is the first milestone, hence far from stable, but the images are now finally building, so we have a good starting point for further development.


With over 800 updates, including minor and major updates, the current milestone is ready for some serious testing. This iteration already sees some major upgrades taking place, with the kernel moving on to 2.6.39 and GNOME to 3.0. In addition we have popular GNOME applications like Evolution, Eye of GNOME and others all synchronized, and KDE’s Plasma Desktop coming along nicely with a minor version upgrade to 4.6.3. You will also find upgrades to GCC, glibc, Perl, Python, and the RPM package manager. Much work has also been put into the much-lauded systemd, which has now been upgraded to version 26.


You can read more about the progress in this recent blog post on Factory by Andreas Jaeger.


As expected from a development release, there is still a lot of work to do, so your input at this early stage will be a huge help in making the final release into the beautifully polished work that we aim for. openSUSE 12.1 Milestone 1 has a list of the most annoying bugs here; please add issues you find and help fix them. As Will Stephenson recently blogged, fixing an issue is a matter of BURPing on build.opensuse.org! Find a how-to here.


View the original article here

Friday Five: Linux Stories for the Weekend

It's Friday, and your body may still be at work — but your brain has checked out for the weekend. Let's give it something to do, by checking out these five posts you might have missed over the week.


Here at Linux.com we link to the stories of the day related to Linux and open source. But sometimes I run into posts and articles that don't quite fit our news categories, or maybe they're just worth calling out in particular. So I wanted to try something new, and post five pieces on Friday that are really worth reading and thinking about. (Hat tip to Ron Miller, from whom I've borrowed the idea...)

Why a JavaScript hater thinks everyone needs to learn JavaScript in the next year: A strong argument in favor of JavaScript, food for thought for anybody who's thinking about learning a new (or their first) programming language.

Samba 3.6 release soon, Samba 4 pushed to late 2011, 2012: Paula Rooney looks at the upcoming Samba release and the long road to Samba 4.0.

Rebooting: Matthew Garrett looks at what happens when you reset a computer. (Technically, this was the week prior, but it's interesting and this is the first week I'm doing this feature...)

Presenting GNOME Contacts: This looks pretty snazzy. Allan Day previews GNOME Contacts, a feature for GNOME 3.2 — and a bunch of mockups that look quite nice.

Living off Freedom: Lars Wirzenius, a longtime Debian contributor, writes about being laid off and pondering doing crowd-funded free software development. Would you pay someone to develop free software?

And of course, I'm sure you've checked out today's Weekend Project on Xfce from Carla Schroder, and my piece from earlier this week on Linux Learners' Student Day, and our other tutorials. Thoughts or comments? Suggestions for next week's five? Let me know, and have a great weekend!


View the original article here

From the MeeGo Conference: The State of MeeGo

Last week I was in San Francisco for MeeGoConf SF, the second large-scale MeeGo event. A lot has changed since the Dublin get-together last November — or at least that's how it looks from the outside. Nokia (one of the co-founders of the project) hired on a new CEO from Microsoft, who announced in February that the Finnish phone maker would start using Microsoft's Windows Phone 7 instead of its own smartphone operating systems. To a lot of mobile-phone-industry watchers, that looked like bad news for MeeGo, and it certainly disappointed a huge portion of Nokia's MeeGo and Qt engineers, not to mention Maemo fans. But there is more to the MeeGo picture, which frames those events in a different light — as last week's event showed.


The truth is handsets aren't the whole story for MeeGo — they're simply the current darling platform of the gadget blog set. In fact, they may not make up a significant portion of MeeGo's revenue stream for device makers, considering that the margins on handsets get smaller and smaller all the time. The Linux Foundation's Jim Zemlin raised that point in Monday's keynote (note that the LF hosts Linux.com, in addition to curating MeeGo project resources, although it does not provide donations or engineering resources), which featured a cavalcade of industry-types and community hackers showing off the latest work in MeeGo's 1.2 release.


Selling software services across non-PC computing devices, on the other hand, is a high-margin and ever-growing business: everything from games to books to specialty content to cloud-based music and storage. And it all depends on a user installing an app on some device with a screen and an OS, but no keyboard. Right now, most consumers in the US think of these services on phones, and to a lesser degree tablets. But they're only thinking about today. It won't be long before connected televisions are commonplace instead of a high-end novelty, kids are demanding games and social apps in the back seat of the minivan, and a slew of other appliances need to connect to something, somewhere. When the other non-PC platforms catch up to handsets in volume, what are they going to run under the hood?


Zemlin made a very strong case for Linux being the answer, with twenty minutes' worth of slides and IDC analysis to substantiate it. Lower development cost, faster time-to-market, all the usual reasons any open source fan already knows. But buried deep within the statistics was an easy-to-overlook point that only MeeGo has going for it: when non-PC computing is pervasive, service vendors are not going to want to re-write their applications for every device.


That's MeeGo's secret weapon: because the core OS is the same, all applications are compatible across all of the deployment platforms. Right now, even the other embedded Linux vendors aren't pursuing cross-device compatibility (see LiMo or Mobilinux, for example).


In contrast, there were MeeGo vendors from a wide variety of hardware angles on display in San Francisco. Lots of tablets from the likes of Intel and WeTab, plus set-top boxes, car navigation units (several already on the road in China, plus Nissan's Chief Service Architect Tsuguo Nobe dropped by to announce the Japanese car-maker was adopting MeeGo), and even music consoles.


Without a doubt, Nokia's decision to ship Windows Phone 7 on its next round of smartphones (it still has one MeeGo phone, already nearing completion, scheduled to be released this year) looks grim. It makes some people think the phone-maker didn't find Linux and MeeGo up to snuff, and (worse) it keeps devices off the market. But the other MeeGo "verticals" don't seem to be affected in the least.


Of course, "the other verticals" essentially means OEMs: hardware device manufacturers. Most of them are interested in the MeeGo Core platform, with an eye towards customizing the interface to fit their own branding and "product differentiation" strategies. What is certainly more important to MeeGo's viability is the health of the developer and contributor community, which makes for another interesting MeeGo Conf assessment.


By all reports, turnout at MeeGo Conf SF was higher than it was last fall in Dublin (an unofficial estimate pegged it at 850; it is trickier to count because as a free event there are always no-shows and people who register, grab a badge, then wander away). Partly the higher attendance reflects the more tech-centric location, but the really interesting factoid was that attendance was "significantly" up among the non-sponsor-attendees — meaning the community.


Close to half of the program was targeted at developers: the application framework, designing interfaces for multiple devices, the build and packaging systems, etc. Based on session attendance and conversations, the MeeGo developer community remains fired up about the platform. On the other hand, it is also frustrated at the lack of commercial MeeGo-based consumer products. Set-top boxes and car dashboard units are good for the foundations of the project, but hardly generate buzz. Most of the community members I talked to were resigned to the fact that public perception of the project is simply going to stall until more devices reach users. They do seem to be using the out-of-the-spotlight time wisely, however, working on the QA process and infrastructure.


But there are two areas where the project leadership does not seem to be getting its message out to the broader open source community. The first is the compatibility between MeeGo and desktop Linux. While the core set of APIs is smaller, by and large porting desktop applications to MeeGo is not difficult, thanks to the availability of Qt, GTK+, and the usual Linux stack underneath. Yet there remains a perception that MeeGo is a different beast, and most ports of desktop applications to the platform come from MeeGo community volunteers, not the upstream projects themselves.


The second message misfire surrounds the demo UX layers. Officially, the screenshots you see of tablet, handset, and even IVI MeeGo front-ends are "reference" designs: the project expects device makers to customize (or even custom-build) their own user interface layers. That concept is a difficult one for the outside world to grasp; you routinely see reviews and criticism of the look and feel or application offerings in the reference UXes, and some of them — netbook in particular — are actually in regular use. By leaving them in the bare-bones, not-quite-polished state in which they ship in the semi-annual releases, the project gives the public at large a bad impression.


The "reference only" concept is probably a relic of Nokia's involvement; the phone maker steadfastly kept its own UI layer closed-source so that it could "differentiate" itself in the market. That's a fair enough concern, but the rest of the project doesn't need to let "unpolished" remain the public face of the MeeGo UX. Slicker UX releases can only help build excitement.


Luckily, there does seem to be some movement on that point; the N900 Developer Edition team is a volunteer squad building a more polished, usable MeeGo Handset experience for the Nokia N900 hardware. Better still, it is providing its changes back to the project. The community itself can build a slick UX layer.


Ultimately, as the hallway consensus indicated, MeeGo will probably continue to have a bit of a public perception issue as long as no mass-market phones and tablets are shipping for the gadget-hungry consumer sector. That's too bad, but that's life. It's good to see that the community is taking it in stride, however, and actually committing its time towards improving the platform. Android and Apple both had to wait until after their devices launched to start building a developer ecosystem: MeeGo actually has an advantage because it already has one just waiting for the hardware to hit the shelf.


View the original article here

Get Ready for LibreOffice 3.4

LibreOffice 3.4 is approaching. The second release candidate for 3.4 was released on May 27, and has improvements for Writer, Calc, and much more. Ready for a look?


The upcoming release of LibreOffice 3.4 is slightly overshadowed by the announcement that Oracle is proposing OpenOffice.org as an Apache Incubator project. What does that mean for the free office suite landscape? It's far too soon to tell, though Apache president Jim Jagielski has reached out to The Document Foundation about cooperation. I'm cautiously optimistic that the projects will find a way to work together and benefit the rest of the FOSS community.


But for now, LibreOffice is the only project with an imminent release — so let's take a look at that and what's in store.


LibreOffice is focusing on more modest, time-based releases. This means that 3.4 doesn't have massive new features, but it does have a slew of performance improvements and minor new features that make life a little better. Let's take a look at some of the highlights.


Sadly, the LibreOffice folks still haven't implemented vi-like keybindings for Writer. (OK, that may only be sad for some of us, but still...) But Writer does have a few minor new features that you might enjoy.


If you do a lot of footnotes and bullets, you're going to find this release interesting. LibreOffice now has support for Greek (upper and lower case) letters for bullets — not something that I've had call for yet, but might be of interest to some users. (Testing this feature shows that I'm not, in fact, up on my Greek alphabet...) You'll find this in the Options tab of the Bullets and Numbering dialog.


If you're working on a paper or document that will be printed in color, or distributed as a PDF, you now have the option of defining the style and color of the footnote separator. You'll find that one in the Footnote tab of the Page Style dialog.


The LibreOffice folks have also been working on "flat ODF" import and export filters — so if you have a need for the .fodt document type, you might want to check this out. What's flat ODF? In a nutshell, it's uncompressed ODF — the standard ODF document is a zipped file with XML data. Most users probably will want to stick with the traditional ODF — but this is a way to use LibreOffice to produce documents that can be worked with by other programs.
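

The difference is easy to see from a shell; the file names below are hypothetical:

unzip -l report.odt              # a regular ODF document is a zip archive of XML parts
grep -c "<text:p" report.fodt    # flat ODF is one uncompressed XML file you can search directly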


The Pivot Table support in Calc has been stepped up a notch in 3.4, and heavy spreadsheet users may want to look at upgrading to 3.4 right away. You now have support for unlimited fields (as opposed to a limit of 8 fields) using Pivot Table. The Pivot Table feature now allows users to define named ranges as a data source as well.


The 3.4 release also adds support for OLE links in Excel documents — so if you're working with a lot of Excel documents, this means that you won't be seeing import errors from Excel docs with OLE links.


A couple of features have been refined to allow per-sheet support as opposed to global document support. Autofilter and named ranges can now be defined on a per-sheet basis rather than being applied to the entire document.


Are you an Ubuntu Unity user? If so, you now have support for the global menu.


The 3.4 release also has a few rendering improvements: better Graphite font handling, text drawing with Cairo, and improved GTK+ theme support. This means that LibreOffice should look much nicer than 3.3 as a native Linux app.


Do you do presentations, and want to put them up on the Web? (One of the first — and most annoying — questions I get when doing a presentation is "will the slides be online?") Web export has been, let's say, not one of LibreOffice/OpenOffice.org's strong points. I tried it out with a couple of my old presentations, and it works like a charm now. So if you need to put a presentation online, LibreOffice 3.4 has you covered.


There are also the usual under-the-hood improvements, bug fixes, and so on. The 3.4 release is not a big leap forward — but it's an improvement and seems stable enough for most users to dive in.


Remember, the LibreOffice project recommends the .0 releases for more adventurous users. If you're wanting to contribute to LibreOffice, or just like to live a bit closer to the edge, the 3.4.0 release is for you. Odds are, if you're reading this article you like to try new features and want to be running the latest and greatest. But if not, then just hang on until the latest LibreOffice turns up in your favorite Linux distribution or at least wait for one of the point releases (like 3.4.1 or 3.4.2) that have cleaned up any nagging bugs that slipped through in 3.4.0.


According to the release notes, you should be able to install 3.4 side-by-side with 3.3. Of course, I read this after I removed the 3.3 packages from Linux Mint and installed 3.4 — but it should save you some trouble if you want to test 3.4 without removing the older release.


Naturally, you'll find packages for most major Linux distributions — the pre-release page has RPM and Debian packages for 32- and 64-bit systems.
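

On a Debian-based system, for instance, installation boils down to unpacking the bundle and installing everything in its DEBS directory; the file name below is an assumption based on the pre-release naming scheme:

tar xzf LibO_3.4.0rc2_Linux_x86_install-deb_en-US.tar.gz
cd LibO_3.4.0rc2*/DEBS
sudo dpkg -i *.deb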


The release plan calls for 3.4.1 to be out in late June, and for 3.4.2 to be released in late July. The next major release of LibreOffice is set for next February. Whether the OpenOffice.org news will impact LibreOffice releases, if at all, is unclear. With LibreOffice ramping up, OpenOffice.org apparently moving to the Apache Software Foundation, and Calligra picking up steam, it looks to be an interesting time for free office suites.


View the original article here