Oct 13

Fujitsu Artificial Market Segmentation (or, how to make a Mac ScanSnap FI-5110EOXM work in Windows 8.1)

The Scanner

Years ago I was looking for a cheap ADF scanner so that I could pretend to transition to a paperless home office.  I settled on the FI-5110EOXM, which is a Fujitsu ScanSnap scanner branded for Mac usage only.  I was running Ubuntu at the time, so I went with this model instead of the more expensive FI-5110EOX.  That model appeared to be the exact same device, but it came with Windows drivers and cost much more.


The Problem

Years went by, and I eventually wanted to use my Mac scanner with a PC.  Looking into it, I found that there is absolutely no supported way to make this work.  According to Fujitsu, if I wanted to scan on a PC then I'd need a Windows scanner instead of my Mac scanner.

Even more insultingly, the Windows version of the scanner did not actually support TWAIN, WIA, or ISIS (the industry standard scanning protocols).  To get those, I'd need to buy one of their higher-end models and spend hundreds of dollars just to be able to use third-party scanning software with the hardware I'd already purchased.

Of course, none of these scanners are supported under Windows 8…

The Solution

How many scanners does Fujitsu actually make, anyway?

Since I'm unwilling to buy another scanner when I've already bought a perfectly good one, I started thinking about what was really going on here.  I had a chunk of Fujitsu scanning hardware that supposedly could not be used on a Windows machine or with standard protocols.  For just a few hundred more dollars, all of those problems magically disappear…  So, either Fujitsu actually has an entirely different manufacturing process wherein they make standard scanners instead of crippled ones, or there's some funny business going on in the software…

Tech Details

My FI-5110EOXM has a hardware ID of VID_04C5&PID_10F2, and that hardware ID is not supported by anything other than the single Mac driver software.  However, I was betting that the underlying hardware actually could do a lot more than that.

So looking around a bit further, I came across the FI-5110C.  This is from the “Workgroup” line of scanners, which has a whole lot more features and actually supports TWAIN and ISIS standards.  The FI-5110C seems to have a suspiciously similar model number to my FI-5110EOXM, so let’s see how similar these are under the hood.

Model Number Reassignment

So now we have a concrete task set up:  I need to take my FI-5110EOXM scanner and convince my computer that it’s actually a FI-5110C.  This is actually pretty simple…

  1. Get the drivers for the FI-5110C from Fujitsu.  I’d recommend using the TWAIN driver.
  2. When you run the downloaded EXE, it will extract a bunch of stuff into a directory called Disk1.
  3. Go into the Disk1/Sub folder, and run the setup.exe installer there.  This will mostly just copy a bunch of drivers all over the place.  The one we want gets put into C:\Windows\fjmini (note that I’m running Windows 8 x64, so you may find some differences here and there).
  4. The file we care about is “C:\Windows\fjmini\fi5110C-x64.inf”, which describes exactly how Windows can tell when it’s working with a FI-5110C (instead of, say, a FI-5110EOXM).
  5. We're going to edit this file, so you may need to run Notepad as Administrator or otherwise change the permissions.
  6. Look for the section that specifies the Hardware ID of the FI-5110C, which looks like this:
[Models.NTamd64]
%USB\FUJITSU_fi-5110CdjU______0.DeviceDesc% = FI5110U.Scanner,USB\VID_04C5&PID_1097
  7. So now we change this ID so that it matches our actual device:
[Models.NTamd64]
%USB\FUJITSU_fi-5110CdjU______0.DeviceDesc% = FI5110U.Scanner,USB\VID_04C5&PID_10F2
  8. Done! (Note that the change was from 1097 to 10F2.)
  9. Now, we just need to tell Windows to use this driver.
  10. Plug in the scanner, open up Device Manager, and find the scanner (which should have the “malfunction” icon attached to it).
  11. Right-click, select Update Driver, and choose to browse for the driver file.  You can now browse over to C:\Windows\fjmini and hit OK.
    • Note: at this point you might get a fatal error stating that hash values don't match.  Windows doesn't like installing modified drivers, so you'll first have to disable driver signature enforcement (one way to do this is sketched just after this list).
  12. If all goes well, you'll now see that you've successfully installed your “FI-5110cdj” scanner, which supports TWAIN and has all the features of a much more expensive model (and works under Windows 8.1)!
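
For reference, here's one common way to disable driver signature enforcement (a sketch of my own, not the exact instructions the note above originally linked; the menu path varies between Windows versions).  From an administrator command prompt:

rem Sketch: put Windows into test-signing mode so modified drivers install.
rem GUI alternative on Windows 8.1: Settings > Change PC settings >
rem Update and recovery > Recovery > Advanced startup > Troubleshoot >
rem Advanced options > Startup Settings > "Disable driver signature enforcement".
bcdedit /set testsigning on
shutdown /r /t 0
rem ...install the driver, then turn test-signing back off:
rem bcdedit /set testsigning off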

The Lesson?

Companies artificially cripple their products all the time in order to create “low end, mid range, and high end” market segments.  If companies would just charge according to how much it actually costs to make an item, then maybe things would be much easier…

Apr 30

MATLAB: Easily perform large batch jobs on multiple machines (without MATLAB Distributed Computing Server)


The Problem: Too Many Test Runs, Too Little Time

I’ve often run into a problem where I need to test many different parameter settings in an experiment.  Typically, I need to test some sets of possible parameters, and I need to run many random repetitions for each parameter set in order to calculate statistical goodness.  (You are using statistical significance in all of your experiments, right?)

In my lab, we have access to a decent amount of computing resources: 14 dedicated machines with 80 cores spread amongst them.  However, we do not have a batch job dispatcher, nor do we have a copy of MATLAB's Distributed Computing Server.  So, I have to SSH into each machine, start multiple copies of MATLAB, set up the parameters, run the test, and collate the results.

Until now…

Coordinating MATLAB Workers Across Machines

Here’s how to pull this off.  You’ll need a pool of machines with access to a common file system.

The MATLAB Batch Job

The MATLAB batch command allows one to queue a job for execution by a MATLAB parallel worker.  It is very useful for running copies of a script in parallel (locally), but it requires a bit of finagling to use it to coordinate workers between machines.  An important detail about the batch command is that it uses the local scheduler, and the local scheduler (by default) stores job information in the .matlab directory in your home directory.  If your home directory is shared across machines, this behavior is not what we want: it will cause conflicts among the local schedulers running on different machines, so each machine's scheduler needs its own machine-local job storage directory instead.
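
To give a flavor of the fix, here's a minimal sketch (assuming the parcluster/JobStorageLocation API from recent Parallel Computing Toolbox releases; older releases use findResource and DataLocation instead).  The idea is to point each machine's local scheduler at a machine-local job directory before queuing anything:

% Minimal sketch: keep local-scheduler job metadata off the shared home
% directory by giving each machine its own job storage location.
cluster = parcluster('local');
jobDir = fullfile(tempdir, 'matlab_local_jobs');   % machine-local path
if ~exist(jobDir, 'dir')
    mkdir(jobDir);
end
cluster.JobStorageLocation = jobDir;
% 'myExperimentScript' is a placeholder for your own test script.
job = batch(cluster, 'myExperimentScript');
wait(job);    % block until the job completes
load(job);    % pull the script's workspace variables back from the worker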

Oct 4

Using ps and awk to output human-readable memory usage for each process

I ran into a little issue: I wanted to list the memory usage of each process, but I wanted the output in human-readable form (as in “df -h”).  So, after a little Googling I found a forum thread that suggested using awk.  A little bit later I had the solution:

Here’s what you see with normal ps:

$ ps u
 
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
kes1 12394 0.0 0.0 107356 2052 pts/1 Ss 08:15 0:00 -sh
kes1 12648 0.5 3.2 720808 527356 pts/0 S+ 08:16 0:16 /usr/lib64/R/bin/exec/R
kes1 13682 0.0 0.0 107360 2016 pts/3 Ss 08:18 0:00 -sh

Here’s what we want to see:

USER    PID     %CPU    %MEM            VSZ             RSS     TTY     STAT    START   TIME    COMMAND
kes1    12394   0.0     0.0            104.84Mb          2.00Mb  pts/1   Ss      08:15   0:00    -sh
kes1    12648   0.7     3.2            703.91Mb        515.00Mb  pts/0   S+      08:16   0:16    /usr/lib64/R/bin/exec/R
kes1    13682   0.0     0.0            104.84Mb          1.97Mb  pts/3   Ss      08:18   0:00    -sh

And here’s how we make that happen with awk:

ps u | awk '{
for ( x=1 ; x<=4 ; x++ ) { printf("%s\t",$x) } 
for ( x=5 ; x<=6 ; x++ ) {  if (NR>1) { printf("%13.2fMb\t",hr=$x/1024) }  
else { printf("\t%s\t",$x)  } 
}
for ( x=7 ; x<=10 ; x++ ) { printf("%s\t",$x) } 
for ( x=11 ; x<=NF ; x++ ) { printf("%s ",$x) } 
print "" 
}'

What does this awk script do?

There are a few different things going on here.  First, we want to print all the non-memory fields as tab-separated text.  Second, we want to print the memory fields as human-readable megabytes rather than raw kilobytes.  Also, we want everything to line up.  Finally, we want to print the process name normally (space-separated rather than tab-separated).

First, the script prints the first 4 fields:

for ( x=1 ; x<=4 ; x++ ) { printf("%s\t",$x) }

Then, we check to see if this is the header line or not. If it is the header line, then NR (the number of records read) will be 1; if it's not, NR will be greater than 1. For the header, we print the text as-is (adding some extra tabs to line things up). For the memory fields, we use printf to format the kilobyte count as human-readable megabytes.

for ( x=5 ; x<=6 ; x++ ) {  
if (NR>1) { printf("%13.2fMb\t",hr=$x/1024) }  
else { printf("\t%s\t",$x)  } 
}

Finally, we print the remaining PS fields as before, but then we switch to space-delimited fields for the process name and arguments, making sure to add a newline at the very end.

for ( x=7 ; x<=10 ; x++ ) { printf("%s\t",$x) } 
for ( x=11 ; x<=NF ; x++ ) { printf("%s ",$x) } 
print ""

Nothing to it!

Feb 27

Building SOAP services on Google App Engine in Java: Part 1 – The Servlet

Background

Google App Engine (GAE) is an interesting platform that aims to allow for the development and hosting of Java, Python, and Go web applications.  The pricing scheme is particularly nice:  low-usage applications are hosted for free, pricing is solely dependent on resource usage (with the ability to set budgets and limits), and the minimum charge is only $2.10 per week.  No, I’m not affiliated with or being compensated by Google, but I do think this platform has some potential.

Google supplies a nice Eclipse plugin for use with the Google Web Toolkit and the App Engine, which allows for pretty easy construction of your standard web applications.  I’ve heard that it’s fairly easy to construct user interfaces, JSP pages, etc., but that’s not what we’re talking about here.

The Task

How can we build a Google App Engine application that exposes a SOAP API?  The primary problem (for me) is that the Axis2 libraries cannot be used on the server side due to permission issues, so things must be done a little more manually.

Google has an article covering the topic, but I wanted to do things slightly differently.  Specifically, I wanted to be able to create my service class in Java, generate the WSDL (with appropriately named fields in the XML types – not Parameter1, Parameter2, …), generate XML types that are shared between the server and client projects, and do it all in such a way that I do not have to worry about keeping track of QNames or other low-level details.  (Example generator invocations are sketched just after the requirements below.)

Requirements

For this project, I use:

• Eclipse for development, with the Eclipse plugin for GAE

• Axis2 for the client class (also for the java2wsdl and wsdl2java tools)

• JAXB for XML types (auto-generated by Axis2)
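
To make the generator step concrete, the tool invocations look roughly like this (a sketch using Axis2's command-line tools; the class name, target namespace, and output paths are placeholders for this example):

# Generate the WSDL from the service class:
java2wsdl.sh -cn com.example.service.ModelService -tn http://ms4models.appspot.com/service -o wsdl
# Generate client stubs and JAXB XML types from that WSDL:
wsdl2java.sh -uri wsdl/ModelService.wsdl -d jaxbri -o client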

The Server

Overview of the Process

Here’s the logic flow for the server:

1. A SOAP request arrives via POST at an HttpServlet.

2. HttpServlet builds a SOAPMessage for the request, and passes it to the SoapHandler class.

3. SoapHandler unmarshals the XML message object, detects its type, and passes a specific application request to the ServiceAdapter class.

4. ServiceAdapter translates from the XML request message (in a safe fashion) to the Service method call.

5. Service method is run, business logic is performed, all the magic happens.

6. ServiceAdapter translates Service method response to XML response message.

7. SoapHandler builds SOAPMessage response (including catching thrown exceptions and turning them into SOAP Fault messages).

8. HttpServlet writes response to client.

All of that seems like a lot of work, but it's pretty straightforward.  The best part?  I'll give you the skeleton for all of it; there's not much application-specific content in any of this.  The process for adding a method to your service class is: implement the service class method, run the Axis2 generators, modify/fix the Axis2 client (the same every time; some auto-generated code must be changed), add 2 lines to SoapHandler, add an adaption method to SoapAdapter, and add a method to the client class.  It's easier than it sounds.
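
Before diving into the servlet, here's a rough sketch of what steps 3-7 look like inside the SoapHandler (my own illustration of the shape of the class, not the code from Part 2; GetModelRequest, GetModelResponse, and ServiceAdapter are hypothetical stand-ins for your JAXB-generated types and adapter):

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPException;
import javax.xml.soap.SOAPMessage;

public class ServiceSOAPHandler {
	private final JAXBContext jaxb;
	private final ServiceAdapter adapter = new ServiceAdapter();

	public ServiceSOAPHandler() throws JAXBException {
		jaxb = JAXBContext.newInstance(GetModelRequest.class, GetModelResponse.class);
	}

	public SOAPMessage handleSOAPRequest(SOAPMessage request) throws Exception {
		// Step 3: unmarshal the body payload and detect its type.
		Object payload = jaxb.createUnmarshaller()
				.unmarshal(request.getSOAPBody().extractContentAsDocument());
		final Object result;
		if (payload instanceof GetModelRequest) {	// two lines per service method
			result = adapter.getModel((GetModelRequest) payload);	// steps 4-6
		} else {
			throw new SOAPException("Unknown request type: " + payload.getClass());
		}
		// Step 7: marshal the result into a fresh SOAP envelope.
		SOAPMessage response = MessageFactory.newInstance().createMessage();
		jaxb.createMarshaller().marshal(result, response.getSOAPBody());
		response.saveChanges();
		return response;
	}
}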

Part 1 – The HttpServlet

Good news!  There's no service-specific code here.  I'll just post the relevant parts of what I use:

public class SOAPServerServlet extends HttpServlet {
	public static final String URL = "http://ms4models.appspot.com/service";

	private static final Logger log = Logger
		.getLogger(SOAPServerServlet.class.getName());
	private static MessageFactory messageFactory;
	private static ServiceSOAPHandler soapHandler;

	static {
		try {
			messageFactory = MessageFactory.newInstance();
			soapHandler = new ServiceSOAPHandler();
		} catch (Exception ex) {
			throw new RuntimeException(ex);
		}
	}

	@Override
	public void doPost(HttpServletRequest req, HttpServletResponse resp)
			throws IOException {
		log.info("SOAPAction: " + req.getHeader("SOAPAction"));
		log.entering("SOAPServerServlet", "doPost", new Object[] {
				req, resp });
		try {
			// Get all the headers from the HTTP request
			MimeHeaders headers = getHeaders(req);

			// Construct a SOAPMessage from the XML in the request body
			InputStream is = req.getInputStream();
			SOAPMessage soapRequest = messageFactory.createMessage(headers, is);

			ByteArrayOutputStream out = new ByteArrayOutputStream();
			soapRequest.writeTo(out);
			String strMsg = new String(out.toByteArray());

			log.finer("SOAP request: " + strMsg);

			// Handle the soapRequest
			SOAPMessage soapResponse = soapHandler
					.handleSOAPRequest(soapRequest);

			// Write to the HttpServletResponse
			resp.setStatus(HttpServletResponse.SC_OK);
			resp.setContentType("text/xml;charset=\"utf-8\"");
			OutputStream os = resp.getOutputStream();
			soapResponse.writeTo(os);
			os.flush();
		} catch (SOAPException e) {
			throw new IOException("Exception while creating SOAP message.", e);
		}
		log.exiting("SOAPServerServlet", "doPost");
	}

	@SuppressWarnings("rawtypes")
	static MimeHeaders getHeaders(HttpServletRequest req) {
		Enumeration headerNames = req.getHeaderNames();
		MimeHeaders headers = new MimeHeaders();
		while (headerNames.hasMoreElements()) {
			String headerName = (String) headerNames.nextElement();
			String headerValue = req.getHeader(headerName);
			StringTokenizer values = new StringTokenizer(headerValue, ",");
			while (values.hasMoreTokens()) {
				headers.addHeader(headerName, values.nextToken().trim());
			}
		}
		return headers;
	}
}

Leave a comment if you have a question about what's going on; I won't explain it all here for brevity's sake (also: because I am lazy).

Coming Soon – Part 2

Aug 9

Eclipse RCP: Setting P2 repositories (update sites) programmatically (for when p2.inf fails).

When deploying a p2-enabled RCP product, you often want to control the set of update sites that can be used so that it includes yours.  There are many resources that explain how to use p2.inf to do this.  However, there is an issue when trying to do this with an application that will be installed into Program Files (or any non-user-writable location) on the user's machine: p2.inf operates by writing the repository to disk the first time the RCP application is run, and if the install location is not user-writable, this will silently fail.

To set the repositories manually, you can use the same technique as the P2 property page:

import java.lang.reflect.InvocationTargetException;
import java.net.URI;
import java.net.URISyntaxException;

import org.eclipse.equinox.internal.p2.ui.model.ElementUtils;
import org.eclipse.equinox.internal.p2.ui.model.MetadataRepositoryElement;

@SuppressWarnings("restriction")
public class P2Util {
  private static String UPDATE_SITE = "http://www.example.com/update_site";

  public static void setRepositories() throws InvocationTargetException {
    try {
      final MetadataRepositoryElement element = new MetadataRepositoryElement(null, new URI(UPDATE_SITE), true);
      ElementUtils.updateRepositoryUsingElements(new MetadataRepositoryElement[] {element}, null);
    } catch (URISyntaxException e) {
      e.printStackTrace();
      throw new InvocationTargetException(e);
    }
  }
}

Note that every repository you want configured must be present in the array passed to ElementUtils.updateRepositoryUsingElements; repositories not in that array will be removed.
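
For completeness, a typical call site looks like this (hypothetical; any early initialization point in your RCP application works equally well):

// For example, during application startup:
try {
	P2Util.setRepositories();
} catch (InvocationTargetException e) {
	// Log and continue; the update site just won't be pre-configured.
	e.printStackTrace();
}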

Jun 3

Easily load Xtext files and objects in Eclipse plugin or RCP projects using adapters

Motivation

I’m currently working on a fairly involved Eclipse RCP project which makes heavy use of Xtext for grammar parsing.  One task that’s come up fairly often is that we need to be able to perform some action on a model object contained in an Xtext file.  For example, we want to provide context menu commands to open up a view to display the structure of a model; to do this we need to load the file, parse it with Xtext, and pass the model object off to the view.

The Xtext FAQ provides instructions for doing this in a standalone Java application, but it involves calling the StandaloneSetup method to create and obtain an injector, which isn't the best for performance.  If we're already in an RCP application that includes our Xtext language's UI plugin, we can efficiently obtain the injector from the activator.  In addition, we can wrap the whole thing in a class that ties into Eclipse's adapter manager framework so that other plugins can load our Xtext models without needing a direct dependency on the UI plugin (though of course they still need to depend on the plugin that defines the model classes).

The Eclipse adapter framework: IAdapterFactory, IAdaptable, and IAdapterManager

In a nutshell, the Eclipse adapter framework works by letting plugins register ‘adapter factory’ classes (that implement IAdapterFactory).  These classes are designed to attempt the singular task of taking an object of some source class and figuring out how to transform it into an object of another target class.  If the transformation is possible, the AdapterFactory must return an object that can be cast as an instance of the target class; if it is not possible, the AdapterFactory must return null.  In addition, AdapterFactory classes implement a method that returns a list of all target classes supported by the factory, and their defining plugin must provide an extension to org.eclipse.core.runtime.adapters specifying which transformations can be performed.  Traditionally, objects that support being adapted implement the IAdaptable interface, but this is not strictly required.

Once an adapter has been registered that can perform a transformation, any plugin can use this with a simple call:

final TargetObjType target = (TargetObjType)Platform.getAdapterManager().getAdapter(sourceObject, TargetObjType.class);
if (target==null) { /* Adaptation failed */ }

The caller is guaranteed that getAdapter will either return null or an object that can be cast as TargetObjType.

The ModelLoadingAdapter

To create a ModelLoadingAdapter, we must implement the code to load domain models from files and register this in plugin.xml.  As an easy extension we will also let the same adapter handle selections and structured selections (allowing us to directly adapt a list selection to a domain model, if the item selected is a file).  After this is done, any plugin can easily load Xtext DSL models from files with a minimal performance hit.

For this example we’re going to assume the projects are called “org.xtext.example.mydsl” and “org.xtext.example.mydsl.ui”, the language is called “org.xtext.example.mydsl.MyDsl”,  it uses files with extension “mydsl”, and the root model of a document is a DslModel.

The IAdapterFactory

Our actual adapter must do three things:  Obtain the Injector for our DSL, use the injector to obtain a properly initialized XtextResourceSet, and use the resource set to load the file.

Obtaining the domain-specific language injector

Update: From the comments below, the easy/correct way to use injection in Eclipse RCP/plugin projects is to make use of the ExecutableExtensionFactory class that Xtext generates.  If, in plugin.xml, you register your AdapterFactory class as “ExecutableExtensionFactory:AdapterFactory” (the factory class name, a colon, then your class name), then you do not need to manually get the injector; you can just declare an injected field such as “@Inject private XtextResourceSetProvider resourceSetProvider;”.
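
Here's a sketch of that variant (my illustration; for this example language the generated factory would be org.xtext.example.mydsl.ui.MyDslExecutableExtensionFactory):

// In plugin.xml, register the factory as
// class="org.xtext.example.mydsl.ui.MyDslExecutableExtensionFactory:org.xtext.example.mydsl.util.ModelLoadingAdapter"
// so that Guice instantiates the class and populates the @Inject fields.
import org.eclipse.core.runtime.IAdapterFactory;
import org.eclipse.xtext.ui.resource.XtextResourceSetProvider;
import com.google.inject.Inject;
import org.xtext.example.mydsl.myDsl.DslModel;

@SuppressWarnings("rawtypes")
public class ModelLoadingAdapter implements IAdapterFactory {
	@Inject
	private XtextResourceSetProvider resourceSetProvider;

	@Override
	public Object getAdapter(Object adaptableObject, Class adapterType) {
		// Same loading logic as the full class below, but using the injected
		// resourceSetProvider.get(project) instead of fetching the Injector
		// from the activator.
		return null; // sketch only
	}

	@Override
	public Class[] getAdapterList() {
		return new Class[] { DslModel.class };
	}
}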

When Xtext generates the .ui project, it creates an activator for this plugin that is responsible for creating and storing the Injector for this language.  We can retrieve the injector from this class using the getInjector method, which takes the name of the language as an argument and either returns the injector or null.  The code for this would be

final private static Injector injector = MyDslActivator.getInstance().getInjector("org.xtext.example.mydsl.MyDsl");

Loading the model

This is very similar to the FAQ's instructions; we just omit the creation of the injector and use the one that we retrieved from the activator.

XtextResourceSet resourceSet = (XtextResourceSet) injector
	.getInstance(XtextResourceSetProvider.class)
	.get(file.getProject());
resourceSet.addLoadOption(XtextResource.OPTION_RESOLVE_ALL, Boolean.TRUE);
Resource resource = resourceSet.getResource(URI.createURI(file.getLocationURI().toString()),true);
DslModel model = (DslModel) resource.getContents().get(0);

Putting it all together

The only details we’re missing are relatively minor: error checking, transforming selections to files, and the format required by IAdapterFactory.  Our complete class ends up looking like this:

package org.xtext.example.mydsl.util;

import org.eclipse.core.resources.IFile;
import org.eclipse.core.runtime.IAdapterFactory;
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.jface.viewers.ISelection;
import org.eclipse.jface.viewers.IStructuredSelection;
import org.eclipse.xtext.resource.XtextResource;
import org.eclipse.xtext.resource.XtextResourceSet;
import org.eclipse.xtext.ui.resource.XtextResourceSetProvider;
import com.google.inject.Injector;
import org.xtext.example.mydsl.myDsl.DslModel;
import org.xtext.example.mydsl.ui.internal.MyDslActivator;
/* Author: Robert Coop
* See: http://coopology.com/2011/06/easily-load-xtext-files-and-objects-in-eclipse-plugin-or-rcp-projects-using-adapters/
*/
@SuppressWarnings("rawtypes")
public class ModelLoadingAdapter implements IAdapterFactory {
	private static org.apache.log4j.Logger log = org.apache.log4j.Logger
		.getLogger(ModelLoadingAdapter.class);
	final private static Injector injector = MyDslActivator
		.getInstance().getInjector("org.xtext.example.mydsl.MyDsl");

	@Override
	public Object getAdapter(Object adaptableObject, Class adapterType) {
		if (adapterType == DslModel.class) {
			if (injector==null) {
				log.error("Could not obtain injector for MyDsl");
				return null;
			}

			if (adaptableObject instanceof ISelection) {
				final ISelection sel = (ISelection)adaptableObject;
				if (!(sel instanceof IStructuredSelection)) return null;
				final IStructuredSelection selection = (IStructuredSelection) sel;
				if (!(selection.getFirstElement() instanceof IFile))
					return null;
				adaptableObject = (IFile)selection.getFirstElement();
			}
			if (adaptableObject instanceof IFile) {
				final IFile file = (IFile)adaptableObject;
				if (file.getFileExtension() == null
					|| !file.getFileExtension().equalsIgnoreCase("mydsl")) return null;

				XtextResourceSet resourceSet = (XtextResourceSet) injector
					.getInstance(XtextResourceSetProvider.class)
					.get(file.getProject());
				resourceSet.addLoadOption(XtextResource.OPTION_RESOLVE_ALL, Boolean.TRUE);
				Resource resource = resourceSet.getResource(URI.createURI(file.getLocationURI().toString()),true);
				DslModel model = (DslModel) resource.getContents().get(0);
				return model;
			}
		}

		return null;
	}

	@Override
	public Class[] getAdapterList() {
		return new Class[] { DslModel.class };
	}
}

Finishing Up

To tie everything together, we just need to modify plugin.xml to tell Eclipse about the adapter, then we can use it to load models easily.  Add the following to plugin.xml

<extension point="org.eclipse.core.runtime.adapters">
      <factory adaptableType="org.eclipse.core.resources.IFile" class="org.xtext.example.mydsl.util.ModelLoadingAdapter">
         <adapter type="org.xtext.example.mydsl.myDsl.DslModel" />
      </factory>
      <factory adaptableType="org.eclipse.jface.viewers.ISelection" class="org.xtext.example.mydsl.util.ModelLoadingAdapter">
         <adapter type="org.xtext.example.mydsl.myDsl.DslModel" />
      </factory>
</extension>

Using our new adapter

Using the adapter from any plugin is very simple.  Given any IFile or ISelection object representing a mydsl file, we can obtain a DslModel using this code

DslModel target = (DslModel)Platform.getAdapterManager().getAdapter(sourceObject, DslModel.class);
if (target==null) { /* Adaptation failed */ }

Final Thoughts

This technique can be applied to any adapter, with the advantage that callers don't need direct dependencies on the plugin providing the adapter factory.  Hopefully this will prove useful to you, either as a demonstration of a good way to load Xtext files or as a demonstration of some uses of the adapter framework.  Enjoy!

May 16

Howto: Mount your Android SD card under Linux via WiFi

This guide will get you from having an Android phone with a Samba sharing app installed (such as the one I currently use: https://market.android.com/details?id=com.funkyfresh.samba) and an Ubuntu (or any generic Linux) machine with Samba installed to being able to mount your SD card via WiFi. We will do this by: setting the computer to use netbios names for IP address resolution, testing the mount with mount.cifs, and then adding a permanent entry to /etc/fstab.

My setup: HTC Inspire 4G, rooted, running CyanogenMod 7; Samba Filesharing on Android app (from the marketplace, free, version 110130-market); Ubuntu 11.04 (with the samba package installed)

Prerequisites

First, you need a Samba server app for your phone (I’m using https://market.android.com/details?id=com.funkyfresh.samba, which seems to work well enough), then you need to go into your Samba app on the Android and set a netbios name for your phone (e.g. I’m using “bcoop-android”, so replace that with whatever name you set). For security reasons, you also want to set a user (e.g. “bcoop”) and a password required to access the share. If you have the option, you need to give the shared directory a name (on my app the name is fixed as “sdcard”).  You’ll also need to make sure that your computer has a samba client installed (this can be installed in Ubuntu by installing the smbclient package).

Using Netbios for IP address resolution

Second, we need to tell the computer to try to perform DNS lookups using netbios names if all else fails. You can tell if you need this step by running the command “ping bcoop-android” (making sure your phone is connected to the same network as the desktop via wireless and that the Samba app on the phone is running). If you receive an “unknown host” error, then the desktop is not able to look up names via netbios, which is a simple fix. Run the command

sudo gedit /etc/nsswitch.conf

and look for the line that starts with hosts:. At the end of this line, add wins. You should end up with the line looking something like:
hosts: files [...] wins

This will tell your machine to use Netbios name lookup if all else fails. You want to make sure to add wins at the end of the string of methods so that it does not check this before other methods. Save the file and close gedit.

Update: After an hour or so, my connection started timing out and I couldn't remount the share.  I was confused about what was going on, but noticed that when I tried to ping the netbios name I suddenly got a response from a 209.XXX.XXX.XXX IP address, and not from my phone.  Long story short, it turns out that my lovely ISP (Comcast) has a policy of hijacking domain names that don't exist so that they can redirect browsers to a search page with ads.  A side effect of this is that all DNS resolution requests are answered, whether the requested name exists or not.  This causes the computer to assume that the request has been answered and never to fall through to the wins netbios lookup.  The solution was to put wins just before the dns entry in the hosts line.

You should now be able to run the ping command and have the computer try to ping an IP address (it doesn't matter whether you receive a response; we're just checking that the computer can translate the netbios name to the phone's IP address).

Note: If you’re not able to get this to work, you can still move on, but just use the phone’s IP address instead of netbios name for the server.  It will be necessary to either continually change the IP address to the phone’s IP, tell your router to assign the phone the same IP address always, or to use some other method to ensure that the phone’s IP address remains correct.

Testing things out: Temporarily mounting the phone

To test things out, we need to create a directory to use as a mount point.  Run the command sudo mkdir /media/android to do this (using a different directory if you'd prefer). Now, we want to manually mount the phone in this directory. There are a couple of different ways to do this, depending on how you want the file permissions to work.  I'll list the different commands you can use, and you can see the section below for further discussion about which might be best.  You will need to modify these commands with some specifics for your setup; see the section immediately afterwards.

To not allow any other users to access your files (the recommended method)

sudo mount -t cifs //bcoop-android/sdcard/ /media/android  -o user=bcoop,uid=bcoop,gid=bcoop,nounix,file_mode=0770,dir_mode=0770

To avoid using ‘nounix’, but allow others to read (but not write) your files

sudo mount -t cifs //bcoop-android/sdcard/ /media/android  -o user=bcoop,uid=nobody,gid=bcoop

To disable permission checking entirely (anyone can read/write your files)

sudo mount -t cifs //bcoop-android/sdcard/ /media/android  -o user=bcoop,noperm

For all methods

You’ll need to replace some parts of this command with your setup information:

  • //bcoop-android/sdcard should be your phone’s netbios name (or IP address) followed by the share name:  //NETBIOS_NAME/SHARE_NAME
  • /media/android should be your mount point directory
  • user=bcoop should be the user name that you set up on the phone for the Samba share:  user=PHONE_SAMBA_USER
  • uid=bcoop,gid=bcoop should be your computer user’s name and group (these are likely the same on a typical setup): uid=COMPUTER_USER,gid=COMPUTER_GROUP
  • uid=nobody should be the name of a fake user on your computer

After running the command, you'll need to enter your sudo password, then your password for the phone's samba share.  If all goes well, you should see no error messages and should be able to run

ls /media/android

and see the contents of your phone. In that case, you’re ready to set the share up permanently. If you don’t mind running the mount command every time, you can just stop here.

Notes regarding file permissions

(This section can safely be skipped if you're not interested in knowing any background about how things work behind the scenes…)

When I was trying this out, the thing that took the longest to figure out was the file permissions used on the phone.  When mounting an SMB (Samba) share, there are a few options when it comes to file permissions: accept the uid/gid (user and group owner id) reported by the phone, force the uid/gid to map to specific users/groups on the computer, or ignore the permissions reported by the phone entirely.

The most convenient option is to ignore the permissions entirely, but it is also the least secure: it would allow any program or user on your computer to have full access to the files on the phone when it is mounted.  The typical approach is to map the user and group from the phone to be equivalent to your computer user.  However, I noticed something odd about the way the permissions are reported on my setup.  I’m not positive if this is just some eccentricity of my specific setup, but the permissions reported by my phone have the user set with no read/write/execute permission, the group set with full read/write/execute permission, and everyone else set with just read/execute permission.  (For comparison, the typical setup is user with full permission, group with either full or read/execute permission, and everyone else with either read/execute permission or no permissions at all.)  So, if one maps the uid/gid to the computer user’s uid/gid then the result is that the current user will have no permissions at all.  One solution is to map the gid to the computer user’s gid, but to map the uid to some fake/unused user (I used ‘nobody’, which is a standard and safe bet).  This results in you having full access to the phone and other users being able to read but not modify the contents, and has the advantage of retaining the maximum amount of functionality (i.e. it doesn’t disable some behind-the-scenes filesystem functionality).  An alternative solution is to disable ‘CIFS Unix Extensions’ and manually set the file/directory permissions as well as the uid/gid.  This has the advantage of allowing you to explicitly remove read permission from other users if desired, but has the possible disadvantage of disabling something that is required (though I have no idea if that is likely to happen or even really possible; please leave a comment if you know something about this that I don’t).

Setting things up permanently

To permanently save these settings, we need to create a credentials file to safely hold your samba share's username and password, and we also need to add the mount information to /etc/fstab so that the system is aware of the settings.  To safely store your credentials, we want to create a file, readable only by your user, which holds your username and password.  To do that, run

gedit ~/.android_credentials

and add the following to this file:

username=YOUR_USERNAME
password=YOUR_PASSWORD

Save the file, close gedit, and run the command

chmod 0600 ~/.android_credentials

to make sure that only you can read that file.

Now, to save the information into fstab, run

sudo gedit /etc/fstab

and add the following line to the file:

//bcoop-android/sdcard/	/media/android	cifs	credentials=/home/bcoop/.android_credentials,uid=bcoop,gid=bcoop,nounix,file_mode=0770,dir_mode=0770,user,noauto	0	0

(Note that you may need to change the options if you’d like something other than the recommended method from above, and you’ll need to replace credentials=/home/bcoop/.android_credentials with the correct path to your credentials file. Also note that the trailing slash on //bcoop-android/sdcard/ is very important. If you forget this trailing slash then you cannot unmount the share as a regular user.)


If all goes well you should now be able to run

mount /media/android

and access your phone’s SD card contents in /media/android. Remember that you need to unmount this by running

umount /media/android

when done. Enjoy!


Apr 19

Using Log4J in Eclipse RCP (and forcing all other plugins to use it too!)

There are many resources out there that describe the process (and headache!) of using Apache's Log4J logging framework (http://logging.apache.org/log4j/) within an Eclipse rich client platform (RCP) project.  After digesting all the resources that the internet has to offer (my favorite links are at the end) and adding log4j to the RCP project that I am currently working on, the most common problems seem to be:

  • Getting log4j into the environment
  • Classloader errors caused by different versions of log4j classes being loaded by different plugins
  • Problems accessing/locating the log4j.properties file

I also had a problem where I wanted to use only Log4J for logging, so I wanted to be sure that all log messages from all plugins in my RCP project were being picked up by the logging framework.

A log4j bundle is available in the Eclipse Orbit builds (http://download.eclipse.org/tools/orbit/downloads/drops/R20100519200754/); so, actually getting log4j into the RCP environment just involves downloading that bundle and requiring it in the RCP project.  However, there is still the potential for classloader problems and dependency issues when multiple plugins are developed as part of a project and each brings its own log4j package to the table.  The solution is somewhat simple:  designate one plugin project to initialize and supply the logging bundle, and have all the other plugins rely on this.  The choice of the master plugin is usually fairly simple;  for my project it is the RCP project itself, which actually initializes the application environment and has a dependency on the feature plugins.  That master plugin should have log4j as a bundle dependency (Required Plug-ins), and the other plugins should simply list org.apache.log4j as an imported package.

To configure the logger using a .properties file, a bit of additional work is needed.  Many resources seem to imply there is a way to accomplish this that results in Log4j automatically loading the properties (because it will search for the log4j.properties file at startup if no config is provided), but I wasn't able to get that to work and didn't want to spend that much time on it.  My solution was to include log4j.properties in the project directory of the RCP plugin, make sure to add this file to the binary build under build configuration, and initialize the logger using this properties file when the application starts.  This initialization is done by adding the following to your Activator class in the master plugin:

import java.net.URL;
import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;
import org.eclipse.core.runtime.FileLocator;

final private static Logger log = Logger.getLogger(Activator.class);

public void start(final BundleContext context) throws Exception {
	// [...]
	// Setup logging
	URL confURL = getBundle().getEntry("log4j.properties");
	PropertyConfigurator.configure( FileLocator.toFileURL(confURL).getFile());
	log.info("Logging using log4j and configuration " + FileLocator.toFileURL(confURL).getFile());
	hookPluginLoggers(context); // You need to add this method to hook other plugins, described later...
	// [...]
}
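
The properties file itself never appears in this post, so for concreteness, here's a minimal log4j.properties (an assumption of mine, chosen to match the console output shown further down):

# Send everything from TRACE up to the console.
log4j.rootLogger=TRACE, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%-4r [%t] %-5p %c (%F:%L): %m%n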

That will load and configure the logger using the properties file. The properties file can be modified without needing a re-compile.  To actually log from any class in any plugin now, just add the following:


public class MyClassInAnyPlugin {
	private static org.apache.log4j.Logger log = org.apache.log4j.Logger
			.getLogger(MyClassInAnyPlugin.class);
	void someMethod() {
		log.debug("A debug message");
		log.warn("A warning message");
	}
}

A description of all the log methods and levels can be found in the log4j docs at the official site.  At this point, you should have logging up and running in your project, but we have yet to ensure that _all_ log messages will be sent through log4j.  In particular, other Eclipse plugins use the Eclipse logging framework; without special hooking, these messages will be missed by log4j.  To hook them, I used the PluginLogListener class described at http://www.ibm.com/developerworks/library/os-eclog/ to add a log listener to each loaded plugin.  We need to add the following method (and field):

import java.util.ArrayList;
import java.util.List;

import org.eclipse.core.runtime.ILog;
import org.eclipse.core.runtime.Platform;
import org.osgi.framework.Bundle;

final private List<PluginLogListener> pluginLogHooks = new ArrayList<PluginLogListener>();

// Hook all loaded bundles into the log4j framework
private void hookPluginLoggers(final BundleContext context) {
	for (Bundle bundle : context.getBundles()){
		ILog pluginLogger = Platform.getLog(bundle);
		pluginLogHooks.add(new PluginLogListener(pluginLogger,
				Logger.getLogger(bundle.getSymbolicName())));
		log.trace("Added logging hook for bundle: " + bundle.getSymbolicName());
	}
}
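
The PluginLogListener class itself comes from the IBM article linked above; a condensed sketch of its essentials (my paraphrase, not the article's exact code) looks like this:

import org.apache.log4j.Logger;
import org.eclipse.core.runtime.ILog;
import org.eclipse.core.runtime.ILogListener;
import org.eclipse.core.runtime.IStatus;

public class PluginLogListener implements ILogListener {
	private final ILog log;
	private final Logger logger;

	PluginLogListener(ILog log, Logger logger) {
		this.log = log;
		this.logger = logger;
		log.addLogListener(this);	// start receiving this plugin's log events
	}

	public void dispose() {
		log.removeLogListener(this);
	}

	@Override
	public void logging(IStatus status, String plugin) {
		if (status == null) {
			return;
		}
		// Mirror the Eclipse severity onto the matching log4j level.
		final String msg = plugin + " - " + status.getCode() + " - " + status.getMessage();
		switch (status.getSeverity()) {
		case IStatus.ERROR:
			logger.error(msg, status.getException());
			break;
		case IStatus.WARNING:
			logger.warn(msg, status.getException());
			break;
		case IStatus.INFO:
			logger.info(msg);
			break;
		default:
			logger.debug(msg);
			break;
		}
	}
}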

Even though we don’t ever need to read the contents of the pluginLogHooks list, it’s very important to store the PluginLogListener instances we create; without persistent storage they will be reclaimed by the garbage collection process and their logging hooks will be removed.  After adding this method, when your application starts up, you should see something like this:

0    [main] INFO  com.rtsync.devs.rcp.Activator  (Activator.java:76): Logging using log4j and configuration [...]/my.project.rcp/log4j.properties
3    [main] TRACE com.rtsync.devs.rcp.Activator  (Activator.java:103): Added logging hook for bundle: org.eclipse.osgi
4    [main] TRACE com.rtsync.devs.rcp.Activator  (Activator.java:103): Added logging hook for bundle: com.ibm.icu
5    [main] TRACE com.rtsync.devs.rcp.Activator  (Activator.java:103): Added logging hook for bundle: org.apache.commons.logging
5    [main] TRACE com.rtsync.devs.rcp.Activator  (Activator.java:103): Added logging hook for bundle: org.eclipse.ant.core
6    [main] TRACE com.rtsync.devs.rcp.Activator  (Activator.java:103): Added logging hook for bundle: org.eclipse.compare.core
6    [main] TRACE com.rtsync.devs.rcp.Activator  (Activator.java:103): Added logging hook for bundle: org.eclipse.core.commands[...]

Now, any regular log message created by any plugin in your RCP project will be caught and interpreted by the log4j framework, which looks like this:

!ENTRY org.eclipse.core.resources 2 10035 2011-04-19 10:02:03.710
!MESSAGE The workspace exited with unsaved changes in the previous session; refreshing workspace to recover changes.
819  [main] WARN  org.eclipse.core.resources  (PluginLogListener.java:91): org.eclipse.core.resources - 10035 - The workspace exited with unsaved changes in the previous session; refreshing workspace to recover changes.

Note that as long as the -consoleLog option is used with Eclipse (which is added by default to your run/debug configurations) you’ll see the original plugin log message and then the Log4J log message.  Now you have the entire environment using the same logging framework, whether it wants to or not!

Good in-depth explanations of logging in Eclipse RCP:


Mar 22

Increase your productivity in Linux by 200% or more with a single command!

I found an amazing way to increase productivity by a large amount with one simple command! Simply run this command:

sudo bash -c "echo -e \"### Sorry guys, I'll return when I have less deadlines\n127.0.0.1 reddit.com\n127.0.0.1 www.reddit.com\n127.0.0.1 slashdot.org\" >> /etc/hosts"

I’m not quite sure what it is about this, but I immediately noticed work was getting done much sooner!

A note for those not familiar with /etc/hosts (and for other sarcasm-deficient folks): This command makes the computer use localhost for the IP address of Reddit and Slashdot, preventing you from accessing those sites.  In a pinch, this command can help people like me who suffer from a lack of self-control when it comes to online distractions…  I'll return some day…

Mar 17

BSEC 2011 Poster


A poster that I presented at the 3rd annual Oak Ridge National Laboratory Biomedical Science and Engineering Conference.

Functional analysis and prediction of tumor growth

Abstract:

Based on a modified logistic model, simulated growth trajectories of brain tumors are prepared. Simulation trajectories track the number of cancer cells within each 3-dimensional area (voxel) in a 128×128 grid. A novel functional analysis procedure is developed which uses a genetic algorithm combined with systems theory principles in order to determine, for a given voxel, which neighboring voxels have the greatest predictive power when compared to the neighborhood as a whole. These groups, or functional masks, are given a fitness measure based on their relative Shannon entropy and the frequency with which observations occur.

This functional analysis procedure is performed over the simulated trajectories, and the discovered functional mask is used as the basis for probabilistic parameter estimation and prediction of tumor growth. In this way, growth predictions are made with significant accuracy in a very general fashion; no domain knowledge about tumor growth or the model being used is incorporated in the analysis and prediction procedure.

Robert Coop is a PhD student at the University of Tennessee in the Machine Intelligence Laboratory. His research interests include machine learning, artificial intelligence algorithms, genetic algorithms, and discrete event system simulation. His work is primarily focused on machine learning methodologies as applied to abstract non-linear domains, and other applications of adaptive optimization methods to problem domains with high dimensionality. Robert Coop holds a B.S. in computer science and an M.S. in electrical engineering from the University of Tennessee and is a student member of the IEEE.

James Nutaro is a member of the Modeling and Simulation Group in the Computational Sciences and Engineering Division at Oak Ridge National Laboratory. His research interests include discrete event and hybrid systems, parallel discrete event simulation, modeling methodologies, and event based numerical methods. This work covers a broad range of application domains, with particular emphasis on control systems operating over IP-based communication links, communication and control in electric power systems, simulation of large transportation systems, and modeling and simulation of wireless communication networks. James Nutaro has authored or coauthored over 25 papers in the open literature. He holds a B.S., M.S., and Ph.D. in computer engineering from the University of Arizona and is a member of the IEEE.
