Archive for category Open Notebook Science

OWW content scraper

I have been highly inspired by Carl Boettiger’s open notebook and his website in general. Like him, I started my open notebook in OpenWetWare. When I first started using OWW, I thought it was the most spectacular thing in the world. It was the first time I ever used a wiki and I must admit that I liked the format a lot.

One of the main reasons why I decided not to have a privately hosted open notebook like Carl’s was that I was concerned about the information disappearing when I stopped paying for a domain. This could happen either from me not paying or from my ultimate journey to forgetfulness or death. There is a lot of good content online that I have posted that should be retained, at the very least for “shits and giggles”. OWW is not my platform and I naively thought that it would persist for all time. It’s the internet, right? Nothing gets thrown away on the internet…

I was told by some patent guru that things on the internet stay forever, which is why you should never post something you think you could patent. Of course, I argued against this point and used the example of Geocities, which does not exist anymore, nor does a complete data dump of Geocities exist for archival purposes. Thus, I know for a fact that things on the internet will disappear at some point in time and it is up to me to ensure that I archive the things I want to keep. This idea came crashing home when OWW went dead for some time and I was left wondering if my information and notebook entries were lost. Thankfully OWW returned; however, it left me with the impression that it would not be very stable in the future. I even tweeted at OWW asking if they still existed or not. I didn’t get a response.

Problems with OWW are why I moved to a wordpress.com site. WordPress has shown itself to be sustainable, at least for the short term that I’ve used it, and it looks like it can and will be preserved for a “long” time. Of course, there is still the issue of archiving what I have already posted to WordPress, but I will cross that bridge when I come to it. Nonetheless, WordPress hasn’t randomly gone dark on me yet and I like that. It doesn’t work very well as a notebook due to its blog nature, but that’s only a quibble since I find it more important to get the information out than to worry about what form the information takes. Yes, a nice leather-bound book with hand-crafted binding is preferable to a light bulb of a computer screen, but what can you do. Plus, I absolutely love getting an email when one of my students posts something. That’s spectacular and I love reading what they write.

Fighting with WordPress to do notebook things is just as bad as fighting with OWW to do non-notebook things. I’ve realized that no matter what service I use as my open notebook, that service will cause me pain. Thus, I have decided that I should have my own site that I maintain, where I can build in the things I would like it to do. Plus, I want to set up my own instance of Carl’s workflow for archiving notebook entries to figshare.

In order to do this, I need to scrape all the information that I posted to OWW. OWW does provide a data dump of the entire wiki, which I tried to parse for just my information. Unfortunately, this proved to be more difficult than I thought. It also meant that I needed to figure out a method for somehow retrieving all the images that I posted to OWW. So, rather than attempt to parse a huge wiki dump, I decided to scrape the information straight from the web pages I made in OWW.

I have used Python and Beautiful Soup to scrape the content from my OWW entries. I will post and describe each step I took to scrape my notebook content in the following paragraphs.

——————

Of course, there are the ubiquitous import statements.

#   Import statements.
import urllib2
from bs4 import BeautifulSoup
import os
import urlparse
import urllib

STEP 1: Get the dates for each notebook entry. The first thing to do is to get a list of all the notebook entries that I made over the years. Thankfully there is a calendar page that shows all the entries I made by year.

Looking at the source of this page shows that there is an array of all the dates on which I made entries. On a side note, I did not do so well when writing in my notebook. This is something that should be remedied in the future.
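
To make that concrete, here is a minimal sketch of the parsing step the function below performs, run on a made-up line with the same shape as the one in the page source (the dates are invented for illustration):

#   Illustrative only: the calendar page embeds the entry dates in a
#   JavaScript array a few lines below a "new CalendarPopup('y1');" call.
sample = "  var fullDates = new Array('01/05/2009','01/12/2009','02/03/2009');"
dates  = sample.replace('  var fullDates = new Array(', '')\
               .replace("'", '').replace(');', '').split(',')
print dates    # ['01/05/2009', '01/12/2009', '02/03/2009']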

The below function loads the calendar for the specified notebook year and searches the HTML for the array of dates. When it finds the array, it puts the dates into a Python list. As can be seen in the above image, the dates are written as mm/dd/yyyy; the URLs, however, are written as yyyy/mm/dd. In order to build the correct URL from the array, I had to reformat each date. Those dates are then used to create the notebook entry URLs.

#   Get the notebook entries for the specified year.
def get_notebook_entries(year):
    #   Read in the notebook calendar page for that year.
    link = 'http://openwetware.org/wiki/User:Andy_Maloney/Notebook/Lab_Notebook_of_Andy_Maloney/' + str(year)
    try:
        request  = urllib2.Request(link)
        response = urllib2.urlopen(request)
    except urllib2.URLError:
        print '\nUnable to open the requested URL.'
        return []
    page = response.read()
    response.close()
    #   Split the HTML into lines for easy parsing.
    page_data = page.split('\n')
    keep = []
    #   Locate the JavaScript array of dates.
    for i, line in enumerate(page_data):
        if "new CalendarPopup('y1');" in line \
                and 'var fullDates = new Array' in page_data[i + 3]:
            keep.append(page_data[i + 3]
                .replace('  var fullDates = new Array(', '')
                .replace("'", '').replace(');', '').split(','))
    #   Keep only the dates on which notebook entries were made.
    keepers  = keep[0]
    keepers_ = []
    #   Reformat each date from mm/dd/yyyy to yyyy/mm/dd.
    for item in keepers:
        u = item.split('/')
        keepers_.append(u[2] + '/' + u[0] + '/' + u[1])
    #   Create a list of (full URL, wiki path, date) tuples for the entries.
    notebook_pages = []
    for element in keepers_:
        u = 'http://openwetware.org/wiki/User:Andy_Maloney/Notebook/Lab_Notebook_of_Andy_Maloney/' + element
        v = '/wiki/User:Andy_Maloney/Notebook/Lab_Notebook_of_Andy_Maloney/' + element
        notebook_pages.append((u, v, element))
    return notebook_pages

Once I have a URL for every notebook entry I made in a year, I then need another function that locates all the subpages of each notebook entry. When I was using OWW, I inadvertently made it more difficult for future me to scrape this information. I did this by putting subpages in my notebook entries that were supposed to be for categorizing purposes. Oops.

This means that I need to locate the URLs on each page and scrape the information off of the subpages. The following function accomplishes this. Again, I try to open the notebook page and then search for the URLs on that page. In order to select only the subpages that I made, a link must contain ‘/wiki/User:Andy_Maloney/Notebook/Lab_Notebook_of_Andy_Maloney/’; otherwise it is rejected. The astute will note that this string was baked into the notebook URL function above.

#   Find subpages in notebook entries.
def get_notebook_subpages(pages, year):
    internal_notebook_links = []
    #   Open each notebook page.
    for thing in pages:
        try:
            request  = urllib2.Request(thing[0])
            response = urllib2.urlopen(request)
        except urllib2.URLError:
            print '\nThe page is not responding.'
            print thing[0]
            continue
        soup = BeautifulSoup(response)
        temp_links    = []
        subpage_links = []
        #   Locate all the URLs in the page and create a list of them.
        for link in soup.findAll('a'):
            temp_links.append(link.get('href'))
        #   Keep only links under this entry's wiki path, excluding the entry itself.
        for item in temp_links:
            if thing[1] in str(item) and not str(item).endswith(thing[2]):
                subpage_links.append('http://openwetware.org' + item)
        internal_notebook_links.append(subpage_links)
    return internal_notebook_links

Now for the real reason I made my own scraper: images. I could have used an already-cooked scraper from someone else, but then I wouldn’t have been able to get the full-size images of items I stored on OWW. In order to do that, I built another function that finds the image URLs and downloads them. I suppose I should have kept all my original images and posts in a better format, something more suited for archival purposes. Oh well. Big lesson learned here.

Again, I open the page and locate all the images on it. I also filter out some of the images, since I don’t need or want them. I then reformat the image URLs in order to download the high resolution images. This part wasn’t so easy to figure out, unfortunately. I believe it was difficult due to the way the image files were structured in OWW’s database: instead of living under my username, they went to seemingly random folders like images/b/b4/my_image.jpg. Nonetheless, the below function gets the images from the page for me.
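
To make the URL surgery concrete, here is what the transformation does to a thumbnail link, built around the made-up file name from above:

#   Illustrative only: MediaWiki thumbnails live under /images/thumb/ and
#   end in a resized copy; dropping 'thumb/' and the last path component
#   recovers the original upload.
import urlparse
thumb = 'http://openwetware.org/images/thumb/b/b4/my_image.jpg/300px-my_image.jpg'
full  = 'http://www.openwetware.org' + '/'.join(
            urlparse.urlsplit(thumb).path.replace('thumb/', '').split('/')[:-1])
print full    # http://www.openwetware.org/images/b/b4/my_image.jpg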

#   Get the full resolution images from a page.
def get_images(link, path):
    #   Open the page.
    try:
        request  = urllib2.Request(link)
        response = urllib2.urlopen(request)
    except urllib2.URLError:
        print '\nUnable to open the requested URL.'
        return []
    soup = BeautifulSoup(response)
    #   Locate all the images on the page.
    page_images      = soup.findAll('img')
    page_image_links = []
    #   Remove images that are not essential to the final output.
    for image in page_images:
        page_image_link = urlparse.urljoin(link, image['src'])
        if 'somerights20.png' in page_image_link\
                or 'poweredby_mediawiki_88x31.png' in page_image_link\
                or '88x31_JoinOWW.png' in page_image_link\
                or 'Random.png' in page_image_link:
            continue
        page_image_links.append(page_image_link)
    high_res_links = []
    image_path     = []
    #   Create the high resolution image links and download each file.
    for image_link in page_image_links:
        if 'thumb/' in image_link:
            #   Thumbnails: drop 'thumb/' and the trailing resized copy.
            high_res_link = 'http://www.openwetware.org' + '/'.join(
                urlparse.urlsplit(image_link).path.replace('thumb/', '').split('/')[:-1])
        else:
            high_res_link = 'http://www.openwetware.org' + urlparse.urlsplit(image_link).path
        high_res_links.append(high_res_link)
        image_name = high_res_link.split('/')[-1]
        outpath    = os.path.join(path, image_name)
        image_path.append(outpath)
        urllib.urlretrieve(high_res_link, outpath)
    return image_path

Here I call the functions I wrote above in order to collect the notebook entry URLs and their subpage URLs for each year.

notebook_pages_2009    = get_notebook_entries(2009)
notebook_subpages_2009 = get_notebook_subpages(notebook_pages_2009, 2009)
notebook_pages_2010    = get_notebook_entries(2010)
notebook_subpages_2010 = get_notebook_subpages(notebook_pages_2010, 2010)
notebook_pages_2011    = get_notebook_entries(2011)
notebook_subpages_2011 = get_notebook_subpages(notebook_pages_2011, 2011)

My next step is to save the HTML to files. I actually only take the content between the ‘<!-- start content -->’ and ‘<!-- end content -->’ comments, which is one reason why those images I skipped over aren’t needed in the final scrape. I do this for both the notebook pages and their associated subpages.
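
Since this start/end scan happens for both the entry pages and their subpages, it could live in a small helper. Here is a minimal sketch of that idea (the extract_content name is mine; the loops below keep the logic inline instead):

#   Hypothetical helper: keep only the lines between MediaWiki's start
#   and end content comments.
def extract_content(page):
    page_data = page.split('\n')
    out_html  = []
    for j, line in enumerate(page_data):
        if '<!-- start content -->' in line:
            for sub_line in page_data[j:]:
                if '<!-- end content -->' in sub_line:
                    break
                out_html.append(sub_line)
    return ''.join(out_html)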

path = '/save/file/path/'
#   Get the beginning page content for the notebook entry.
no_response = []
for i, item in enumerate(notebook_pages_2009):
    year = '2009/'
    date = notebook_pages_2009[i][2].replace('/', '_')
    os.makedirs(path + year + date)
    new_path = path + year + date + '/'
    try:
        request  = urllib2.Request(item[0])
        response = urllib2.urlopen(request)
    except urllib2.URLError:
        no_response.append(item[0])
        continue
    page = response.read()
    response.close()
    page_data = page.split('\n')
    out_html  = []
    #   Keep only the lines between the start and end content comments.
    for j, line in enumerate(page_data):
        if '<!-- start content -->' in line:
            for sub_line in page_data[j:]:
                if '<!-- end content -->' in sub_line:
                    break
                out_html.append(sub_line)
    #   Soup to eat.
    soup = BeautifulSoup(''.join(out_html))
    image_path = get_images(item[0], new_path)
    #   Point all image tags at the locally downloaded copies.
    try:
        for j, thing in enumerate(soup.findAll('img')):
            if image_path[j].split('/')[-1] in str(thing):
                thing['src'] = image_path[j]
    except IndexError:
        pass
    filename = item[0].split('/')[-1]
    f = open(new_path + filename + '.html', 'w')
    #   Soup to save.
    f.write(soup.prettify('utf-16'))
    f.close()

#   Get the subpages of the notebook.
for i, item in enumerate(notebook_subpages_2009):
    year = '2009/'
    date = notebook_pages_2009[i][2].replace('/', '_')
    new_path = path + year + date + '/'
    for thing in item:
        new_folder = thing.split('/')[-1]
        os.mkdir(new_path + new_folder)
        new_folder_path = new_path + new_folder + '/'
        try:
            request  = urllib2.Request(thing)
            response = urllib2.urlopen(request)
        except urllib2.URLError:
            no_response.append(thing)
            continue
        page = response.read()
        response.close()
        page_data = page.split('\n')
        out_html  = []
        #   Keep only the lines between the start and end content comments.
        for k, line in enumerate(page_data):
            if '<!-- start content -->' in line:
                for sub_line in page_data[k:]:
                    if '<!-- end content -->' in sub_line:
                        break
                    out_html.append(sub_line)
        #   Soup to eat.
        soup = BeautifulSoup(''.join(out_html))
        image_path = get_images(thing, new_folder_path)
        #   Point all image tags at the locally downloaded copies.
        try:
            for j, thingy in enumerate(soup.findAll('img')):
                if image_path[j].split('/')[-1] in str(thingy):
                    thingy['src'] = image_path[j]
        except IndexError:
            pass
        f = open(new_folder_path + new_folder + '.html', 'w')
        #   Soup to save.
        f.write(soup.prettify('utf-16'))
        f.close()

This works fairly well, although I have had some timeouts that caused me to rerun some things. Also, I completely forgot about YouTube and Google Docs content; I’ll have to figure that one out. The good news is that I now have nicely formatted HTML notebook entries from this scrape, and they are in date-stamped folders. I also think I should get the talk pages on the wiki, although I’m pretty sure I don’t have very many of them.
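
For the YouTube and Google Docs content, one possible starting point would be to list the frame and object tags on each page and record their sources. This is only a sketch, and it assumes the embeds show up as iframe, embed, or object tags; I haven’t checked how OWW actually marks them up:

#   Hypothetical: collect the sources of embedded players and documents
#   so they can be archived separately.
def find_embeds(soup):
    sources = []
    for tag in soup.findAll(['iframe', 'embed', 'object']):
        src = tag.get('src') or tag.get('data')
        if src:
            sources.append(src)
    return sources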

This was a great learning experience for me. I’m sure there are easier ways to do it and better ways to catch exceptions, but this is the way I did it, pulling from my knowledge base.
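
On the exceptions point, one improvement would be to retry a flaky page a few times before giving up, instead of rerunning the whole scrape after a timeout. A minimal sketch of that idea (the open_with_retries name and the retry counts are mine):

#   Hypothetical helper: retry a URL a few times before giving up, to
#   ride out the intermittent timeouts mentioned above.
import time
import urllib2

def open_with_retries(url, tries=3, wait=5):
    for attempt in range(tries):
        try:
            return urllib2.urlopen(urllib2.Request(url), timeout=30)
        except urllib2.URLError:
            time.sleep(wait)
    return None

Each try/except block above could then become a call to this, skipping the page when it returns None.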


ALP_medical devices project

On Monday, 10-01-2012, I took the PVC tube that Damian and the others have been working on to get some fittings mounted to it. We ended up only having time to put a fitting in the center of the device so that the pressure gauge can mount to it. I am having the gauge checked to see if it will hold a proper vacuum and will have the end caps of the pipe fitted for direct mounting to the Asco valves we have.

Danielle, if you can be sure to help Damian have the vacuum pump working by Friday, that would be spectacular. This way we can check to see if the device will hold a vacuum or not.

In the meantime, I will order the parts necessary for proper pipe sealing, such as Teflon tape and RectorSeal T Plus 2 goop.


Leveling the tapper

All our efforts have made the tapper quite a robust machine. Unfortunately, there are still problems with it that Damian pointed out in his last post on July 6, 2012. The issue is that the table on which the tapper sits is not level. This causes our measurements of the tapped powders to acquire an undesirable slant. This issue would be fixed instantly with a leveled optical table (for which I will follow Bill Hooker’s advice and crowd-source the funds); unfortunately, we don’t have one. So, I tried the next best thing, which is a large piece of aluminum that is about 0.5″ thick and 2′ x 1.5′. It was the bottom support for an old circuit board maker that was made in West Germany in the ’80s. That’s right kiddos, West Germany. I will post pictures at a later date. Nonetheless, the support had tapped screw holes that I am using as a rough way to level the aluminum. I have roughly leveled the plate and now need to populate it with the tapper. The unfortunate thing is that the plate does not have tapped holes, which means we will have to think of a way to keep all the components on it from moving. I bet magnets would work, but I’m not sure they would work any better than the tape we are already using. Well, here’s to experimenting!


Debugging the tapper

Friction in the tapper

Jihyun noticed that the tapper had an issue with “friction” when it was tapping. It turns out that the baseplate I used to connect the post holders together would become skewed during taps, causing misalignment of the post holders with the optical posts. This caused the tapper to not tap properly. Below is an image of the baseplate, highly skewed for illustrative purposes.

To alleviate this, I have switched out the baseplate for one that does not allow lateral motion; see the figure below.

This baseplate does not allow for left/right motion (with respect to the line made by the screws) and thus does not cause the tapper to become misaligned and develop excess “friction”.

Stability

Another issue with the tapper is stability. Ideally, I would have everything mounted on an optical table; everything would then be in the same plane, alignment would be a non-issue, and we wouldn’t have to deal with the movement issues that are cropping up. For one thing, the setup, once it is ready for data taking, should never move while taking data. Neither should the focus of the camera change while data is being taken. Both problems have occurred with some data we have taken and forced us to retake it, because any movement causes inconsistencies in our data.

To alleviate this, I have taped/clamped everything down.

This should help with keeping the screen and tapper in one place. I highly recommend that they do not move while taking data. If they do, then the images taken before and after the movement will be different and will cause inconsistencies.

Another issue I saw with the setup is that sometimes the screen would not be perpendicular to the light source. This is not good as it will cause keystoning with the shadow. I’d much rather not correct for this and would prefer the screen to be perpendicular to the light source at all times.

Tilting powders when tapping

Sometimes after some taps, the powder in the cylinder will no longer be level with respect to the small optical breadboard. This is a bad thing as we are trying to measure compaction with each tap. I believe that this is caused by improper alignment of the cylinder as can be seen in the below image.

It’s faint, but I made a plumb line and illuminated it along with the cylinder. The shadow on the screen shows that the plumb is not parallel with the cylinder, and thus the cylinder is not exactly vertical. This could be the cause of the powders tilting when being tapped. This is a difficult thing to fix and may need some redesign of the system.


Tap density setup

As promised, here is a description of the tap density apparatus. Below is a SketchUp rendering of the final version of the apparatus. It uses two optical posts and post holders from Thorlabs as the piston and guide, respectively. The two posts are fixed together by a baseplate spacer. The baseplate is screwed into the post holders using the post holders’ thumb screws. I removed the inner tightening screw of each thumb screw to allow for better ventilation of air when the column is tapped. The graduated column is held onto the post holder with a few hose clamps. The hose clamps are then connected to each other using PCB spacers.

Below is a complete bill of materials.

Personally, I wouldn’t use Fisher brand glassware as I’m typically rough with glass and Fisher products have a difficult time standing up to my abuse. Below are pictures of the setup.

The above picture shows the actual setup. The system is clamped down to the table using a C-clamp to enhance stability. Behind the cylinder in the background is a plastic square holding up a white sheet of paper. This is used to create a shadow of the cylinder, as seen below. The shadow is generated using the light source from a surplussed carousel slide projector which, I just found out, you can still purchase on Amazon. Neat!

This shadow is imaged by a camera directly behind the screen.

The camera being used is a Logitech C910. We are using GTK+ UVC Viewer, also known as guvcview, as the software to capture images. Unfortunately, there have been problems in obtaining images and we have yet to determine a unified collection mechanism. But guvcview is open source and is filling the bill for the moment.

I have been sorely wanting an optical table to conduct these experiments on. Especially since there is no means for us to keep things in one place other than to tape things down and there is no way for us to adjust things easily when we need to. This is just one of those things. If anyone out there wants to donate an optical table to me, please feel free to contact me through WordPress.

Taps are done with a spacer that is 0.5″ in height. The cylinder is placed on the spacer and then the spacer is removed, and…the harsh mistress known as gravity does her magic.

We have two separate scripts that analyze the data: one that Damian wrote in Python, and one that I wrote in Octave. I may combine the two so that we have a GUI that can be used cross-platform for future endeavors in the Smyth Lab. I’ll have to talk to Damian about that.


Measuring tap densities

We have devised an apparatus that allows visualization of powder compaction for individual taps. The device has been in a state of constant flux as we try to iron out the bugs in the system, and I will update my notebook with a description of the build soon. Up till now, most of our time has been spent building the device and making sure it works properly. Recently we were able to get data, and both Damian and I have been working on scripts that programmatically calculate the distance the powder compacts with each tap. While Damian and I are working on the same thing, his approach is very different and, I must admit, I’m intrigued by his logic. Hopefully our scripts will output the same thing because if they don’t, we will have to debug things.

Below is a script that I have written in Octave. With minor changes (for example, replacing endfor with end and the double-quoted strings with single-quoted ones), it should run in MATLAB as well.


% Generate a list containing the names of the files to be analyzed.
fileDirectory = 'PATH/TO/DIRECTORY/';
filePattern   = fullfile(fileDirectory, '*.png');
dataFiles     = dir(filePattern);
baseFileName  = {dataFiles.name}';
fullFileName  = strcat(fileDirectory, baseFileName);

plotDirectory = 'PATH/TO/PUT/PLOTS/';
pixels        = 56;      % Pixels per 1.27 cm, from the ruler in the images.
cutoffValue   = 14000;   % Summed-intensity threshold marking the shadow edge.
lactose       = 'pharmatose125M';
sizeRange     = '--63-150um';
trial         = '--trial01';

saveName  = strcat(lactose, sizeRange, trial);
plotTitle = saveName;

% Open each image file, select the roi, and find the shadow edge.
for j = 1:numel(fullFileName)
  x   = 0:1:250;
  img = imread(fullFileName{j});
  roi = img(130:255, 225:475);
  [m, n] = size(roi);
  % Sum the pixel values column-wise.
  for i = 1:n
    pixelSum(i) = sum(roi(:, i));
  endfor
  cutoff  = [x; pixelSum]';
  ind     = find(cutoff(:, 2) >= cutoffValue);
  tap(j)  = ind(1) * (1.27 / pixels);
  tapY(j) = cutoff(ind(1), 2);
  X = x * (1.27 / pixels);
  figure(j);
  subplot(2, 1, 1);
  imshow(img);
  subplot(2, 1, 2);
  hold on;
  plot(X, pixelSum, 'b', 'LineWidth', 1.2);
  plot(tap(j), tapY(j), 'ks', 'markersize', 10);
  xlabel('Distance (cm)');
  ylabel('Intensity (arb. units)');
  axis([0, 6, 8000, 26000]);
  title(strcat(plotTitle, sprintf('--%04d.png', j)));
  legend('Pixel intensity values.', 'Point where the end of the shadow is defined.');
  set(gca, "xminortick", "on", "yminortick", "off");
  print(strcat(plotDirectory, saveName, sprintf('--%04d.png', j)));
  hold off;
  close(j);
endfor

figure();
plot(tap, 'b.');
title(strcat(lactose, '--', sizeRange, '--', trial, '--Tapped density'));
xlabel('Number of taps');
ylabel('Distance (cm)');

The script will open .png files in the specified directory. The raw data looks like the figure below.

There is a ruler (the top vertical bar) that is used to determine pixel sizes in the analysis. The ruler tells us that, for this set of images, 56 pixels equaled 1.27 cm, so each pixel spans 1.27/56 ≈ 0.023 cm. Obviously the image is on its side, with gravity pointing to the left; this statement will become obvious in a later post. Nonetheless, this is what the raw image files look like.

Processing the images entails selecting a region of interest (ROI), summing all the pixel values (column-wise) in the ROI, and finding the position along the horizontal direction of the raw image where the summed pixel intensity first rises above a threshold. I have made the script output each plot, which increases the computation cost but I think is essential for checking whether the final output graph is okay. The intermediate processed graph looks like the image below.

In it, I have attached the original image to the pixel intensity graph for clarity. Just for fun, and to see if WordPress would allow me to do it, I have made an animated gif that shows the complete run of taps we performed. In order to visualize it, you need to click on the image and it will run in a new browser window (tested on Chrome).

Ultimately we are interested in compaction versus taps, and the above script does give that output. There is still some massaging to be done with the script and the outputs of the graphs. I think I will want to do some analysis in qtiplot as well, so the output will need to save the data appropriately. Nonetheless, here is the final graph that I was initially interested in producing.

Thanks again to everyone for your help, suggestions, data taking, and time working on this.


Tap densities

First, I’d like to thank all the students for transitioning to WordPress. From what I can tell, OpenWetWare is not being developed anymore and, from my standpoint, should not be used. I will have to figure out a way to move the notebook entries written there to WordPress for both me and Sarah.

Tumblr is a great blog-type service and there is nothing wrong with it. I just felt that we should all be on the same page when it comes to notebooks. Plus, WordPress will send an email when a notebook entry has been made. This is ideal for me and for anyone who is working on multiple projects and wants to stay connected with the people updating things.

Thanks again everyone!


Monday

Roughness

Today I read more about roughness and started a LaTeX document with my re-derivation of Hamaker’s constant. I also got a nice paper discussing theoretical models of adhesion using different roughness models.

I am still working on the complete derivation and once I am finished, I will post it in my notebook.

Plumes

I spoke with Oli today about some high speed imaging techniques and experiments to do. I am engineering a chamber for the inhalers now.

Sputter coater

I still have not gotten the sputter coater working. I am going to try vacuum greasing the chamber a lot and see if that does anything.

nanoTherics

I was able to contact an engineer at nanoTherics today. Hopefully we can figure out what is wrong with our device. So far, my interactions with them indicate that they are an awesome company and anyone reading this should buy stuff from them.


Friday

Sputter coater

Damian and I tried even more things, including removing and cleaning all connections to the chamber. Nothing worked. We even globbed on a bunch of vacuum grease everywhere to see if there were inadvertent leaks. Again, nothing worked.

Accelerometer power supply

Sarah and I completed the power supply modifications today. This is good news, as we can now start looking at the accelerometer for the pendulum.

Non-ferrous stirrer

Tommy needed a stirrer for magnetic nanoparticles, so I ripped apart a hand drill that ran off a 6V battery pack. Unfortunately, the battery pack was dead and the charger didn’t work, so I swapped out the battery pack for a 6V wall wart that doesn’t supply much current. This is fine, as the stirring doesn’t need to be fast. Removing the battery pack took away the ability to change the speed; I’m guessing there is some way to reinstate this. In the meantime, sorry Hugh for ripping apart your old hand drill. I will fix it when it is no longer needed. The stir bar is a glass pipet that was cleaved, and the paddle is a microscope slide epoxied to the pipet. No metal!


Thursday

Sputter coater

The sputter coater has two valves: one controls the exhaust (for when you want the chamber to return to atmospheric pressure) and the other fills the chamber with the gas of your choice. In order to determine if the release valve is the culprit for our leak, I did the following things.

  • Removed the release valve and re-Teflon taped the filter and hose connected to it.
  • Completely bypassed the release valve and rigged together a manual release valve.

Neither of these things worked. Rather than try to fuss with the valves, I am going to remove all connections to the chamber and try to find a way to test pressure inline with the vacuum pump. Perhaps the actual chamber is the problem.

  • Plug all connections to the chamber and test.

If the chamber isn’t the problem, then it may be the pressure gauge. That, I can’t fix, and replacing it will be an absolute bear since the machine is out of production.

Roughness

Continuing with my other interest in roughness, I obtained the original Hamaker publication, which introduces the aptly named “Hamaker coefficient” that I have been reading about. I plan to rederive the coefficient for two spherical bodies, as it should be interesting.

I have been able to read Hamaker’s original article

Hamaker HC (1937) The London-van der Waals attraction between spherical particles. Physica 4:1058–1072. http://dx.doi.org/10.1016/S0031-8914(37)80203-7.

and I have worked through his calculations. I am getting a sign error for some reason and will have to investigate it later. At the very least, it feels good to do some math and flex my integrating muscles. I’ll post my calculations in a later post.
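
For reference, the result I am working toward is the standard form of Hamaker’s expression for the interaction energy between two spheres of radii $R_1$ and $R_2$ whose centers are a distance $C$ apart (worth checking against the paper itself):

$$E = -\frac{A}{6}\left[\frac{2R_1R_2}{C^2-(R_1+R_2)^2} + \frac{2R_1R_2}{C^2-(R_1-R_2)^2} + \ln\frac{C^2-(R_1+R_2)^2}{C^2-(R_1-R_2)^2}\right]$$

where $A = \pi^2 q_1 q_2 \lambda$ is the Hamaker coefficient, with $q_1$ and $q_2$ the numbers of atoms per unit volume of the two bodies and $\lambda$ the London–van der Waals constant. The overall minus sign makes the interaction attractive, which is exactly the sort of place a stray sign error is easy to pick up.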
