This is a collection of ideas I have for projects. Some are in progress while others are just fig newtons of my imagination.
Please feel free to steal these ideas, but if you do, I’d really appreciate it if you let me know and credited me in some way. I’d also love to write a blog post on your implementation!
Web-based Visualization of Dropbox Space Usage
Proposed Name: “Boxcutter” or “Boxcutters”
Basically Windirstat for Dropbox, but as a web application instead of a desktop application. Sure, you can run Windirstat or an equivalent on your Dropbox folder on your local machine, but that assumes you’ve downloaded/synced all of your files. Users with very large Dropbox accounts may use the ‘selective sync’ feature to sync only certain subfolders of their Dropbox folder. A web-based space-usage visualization tool would be ideal for those users, and also for users who are simply away from their main computer.
I’d like to use Node.js and the Express web framework to make the website. Node.js has a few community-made Dropbox libraries. I like the looks of “dbox” (https://github.com/sintaxi/node-dbox). It would let you use the Dropbox API to recursively walk a user’s Dropbox folder and build a tree representation of the file-size metadata.
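The recursive walk itself is simple regardless of language. Here’s a sketch in Python for illustration (the actual site would use Node.js + dbox); `list_folder` is a hypothetical stand-in for whatever Dropbox API metadata call you end up using, assumed to return entries with `name`, `path`, `size`, and `is_dir` fields:

```python
def build_tree(list_folder, path="/"):
    """Recursively build a {name, size, children} tree of space usage.

    `list_folder` is a hypothetical callable: path -> list of entry dicts.
    Folder sizes are computed by summing their contents, since the API
    reports sizes only for files.
    """
    children = []
    total = 0
    for entry in list_folder(path):
        if entry["is_dir"]:
            sub = build_tree(list_folder, entry["path"])
            children.append(sub)
            total += sub["size"]
        else:
            children.append({"name": entry["name"],
                             "size": entry["size"],
                             "children": []})
            total += entry["size"]
    return {"name": path, "size": total, "children": children}
```

The resulting tree (folder sizes rolled up from their contents) is exactly what the visualization layer needs.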
From there, just create an HTML5 canvas element and draw boxes to represent files/folders. The simplest method would be to divide the canvas into boxes representing every file in the Dropbox account and display the location and file size on mouseover. A more complex method might involve defining a threshold (perhaps determined programmatically from the average file size) to decide which files to draw as boxes. Or you could draw boxes for just the directories, with sizes proportional to space usage, and let the user click a box to redraw the canvas with the contents of that directory. Or just copy Windirstat’s drawing algorithm, since it’s open-source.
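The “sizes proportional to space usage” layout is the classic treemap problem. A minimal slice-and-dice version (Windirstat actually uses a fancier squarified variant) is just proportional splitting of a rectangle, sketched here in Python even though the real thing would run in browser JavaScript:

```python
def slice_and_dice(items, x, y, w, h, vertical=True):
    """Lay out (name, size) pairs inside the rectangle (x, y, w, h).

    Splits the rectangle into strips proportional to each item's size,
    along one axis. Returns (name, x, y, w, h) boxes ready to draw on
    a canvas. A real treemap would recurse into subfolders, flipping
    `vertical` at each level.
    """
    total = sum(size for _, size in items) or 1
    boxes = []
    offset = 0.0
    for name, size in items:
        frac = size / total
        if vertical:
            boxes.append((name, x + offset, y, w * frac, h))
            offset += w * frac
        else:
            boxes.append((name, x, y + offset, w, h * frac))
            offset += h * frac
    return boxes
```

Each returned box maps directly to a canvas `fillRect` call plus a mouseover hit-test.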
Google Video Chat (AKA Google Talk) Surveillance Robot
Proposed Name: None, but I’ve been using “kconbot” as a codename
It’d be neat to have a dummy account for Gchat/Gtalk logged in on a webcam-equipped computer in my home that’s capable of starting a video chat with my main account when it detects motion from the webcam. So if I’m in lab and I have my laptop with me, I’d be able to see any motion going on in my home.
But why stop there? Make it run on a Raspberry Pi or BeagleBone and connect some motors to create a little telepresence robot. You could control it using pre-defined keywords that you instant-message to it remotely.
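The keyword protocol could be as dumb as a dictionary lookup. A sketch, with entirely made-up command names and motor conventions:

```python
# Hypothetical chat-keyword protocol for the robot. Each command maps to
# (action, left_motor_speed, right_motor_speed); the names and the -1..1
# speed convention are illustrative, not from any real library.
COMMANDS = {
    "forward": ("MOTOR", 1, 1),
    "back":    ("MOTOR", -1, -1),
    "left":    ("MOTOR", -1, 1),   # spin in place
    "right":   ("MOTOR", 1, -1),
    "stop":    ("MOTOR", 0, 0),
}

def handle_message(text):
    """Parse an incoming chat message; return a command tuple, or None
    to ignore anything that isn't a recognized keyword."""
    word = text.strip().lower()
    return COMMANDS.get(word)
```

The XMPP client would call `handle_message` on every incoming IM from the whitelisted account and forward non-None results to the motor driver.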
I wrote a blog post a while back announcing this idea and describing a little bit of progress. Contact me for the code.
Google released some information recently about how it initiates voice and video calls (https://developers.google.com/talk/call_signaling). Google chat/talk is based on XMPP (Extensible Messaging and Presence Protocol), which a lot of instant-messaging technologies actually use (including Facebook Chat). But the voice and video calling happens via an extension to XMPP called Jingle. Or, like, something pretty similar to Jingle. Google is weird. Anyway, I foresee implementing this idea using:
- xmpppy (a well-documented, open-source XMPP library for Python, can be used to write the client)
- farstream (a library for voice and video chat conferences, supports Jingle and provides Python bindings)
- gstreamer (a very sophisticated media library; farstream actually sits on top of gstreamer and uses it for the media streaming, but in order to detect motion from the webcam it may be necessary to use gstreamer outside of farstream)
- OpenCV (the open-source computer vision library, man I hope this isn’t needed because it’s notoriously finicky, but it’d definitely be able to detect motion from the webcam)
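If OpenCV turns out to be overkill, the motion-detection part can be done with plain frame differencing on grayscale frames pulled from gstreamer. A minimal sketch, assuming frames arrive as equal-length sequences of 0–255 pixel intensities (the thresholds are guesses you’d tune against your webcam and lighting):

```python
def motion_detected(prev, curr, pixel_delta=25, changed_frac=0.01):
    """Return True if enough pixels changed between two grayscale frames.

    A pixel counts as "changed" if its intensity moved by more than
    `pixel_delta`; motion is declared when at least `changed_frac` of
    all pixels changed. Both thresholds are illustrative defaults.
    """
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_delta)
    return changed >= changed_frac * len(prev)
```

On a real frame stream you’d also want a little debouncing (e.g. require motion in a few consecutive frames) so a single noisy frame doesn’t trigger a video call.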
I don’t imagine this idea would work on any operating system besides Linux, but that’d be okay for the Raspberry Pi or BeagleBone. Huge bonus points if you make a Debian package for it and get it into one of the major apt repositories.
For the robot portion, mount the board and webcam on a Pololu 3pi chassis or a similar robotic platform. The 3pi uses an Arduino or mbed for control, so you could have the main board send commands over serial to the Arduino/mbed to make the robot move. Or just drive the motors using the GPIO pins on the main board.
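The serial route could look something like this. The one-byte protocol (F/B/L/R/S) is entirely made up here and would have to match whatever the Arduino/mbed sketch expects; the port object would come from pyserial on the real hardware:

```python
def encode_command(keyword):
    """Translate a chat keyword into a one-byte serial command.
    The F/B/L/R/S protocol is hypothetical."""
    return {"forward": b"F", "back": b"B", "left": b"L",
            "right": b"R", "stop": b"S"}.get(keyword)

def send_command(keyword, port=None):
    """Encode the keyword and write it to the serial port, if one is open.

    On the robot, `port` would be something like
    serial.Serial("/dev/ttyACM0", 9600) from pyserial; passing None
    (e.g. in tests) skips the write. Returns the encoded byte, or None
    for unrecognized keywords.
    """
    cmd = encode_command(keyword)
    if cmd is not None and port is not None:
        port.write(cmd)
    return cmd
```

Keeping the encoding separate from the I/O makes it easy to test the protocol without any hardware attached.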