Subject: Re: Autonomous Robot
Newsgroups: lugnet.robotics
Date: Fri, 11 Aug 2000 22:18:28 GMT
in article 399399F2.ADECFC6D@airmail.net, Steve Baker at
lego-robotics@crynwr.com wrote on 8/10/00 11:15 PM:

But if the robot isn't accurately pointed at the laser, it won't be able to
do that calculation since the ANGULAR separation of the detectors (subtended
at the laser) won't be known.

You can handle this two ways.

1) omnidirectional laser sensor with cylindrical cross-section, which always
has the same width no matter the heading of the robot

Yep - that would work - but we don't have an omni-directional sensor.  I guess
a conical diffuser with a regular lego sensor pointing straight upwards would
work OK though.

That's the sort of thing I've been imagining.  JP Brown's Laser Target
invention

http://www.legomindstorms.com/members/gallery.asp?userid=36101#LaserTarget

could be generalized up to something involving a paper cylinder.  A paper
cone has variable width, so it wouldn't work with my scheme.  I think it was
you who earlier suggested a frosted cube of glass or Lucite atop a light
sensor.  Make it a cylinder instead!  And thinking about this, I see another
engineering tradeoff: enlarging the sensor to increase angular resolution
diffuses the laser beam inside the sensor, making it more difficult to
detect.
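To put rough numbers on that tradeoff (all distances here are invented for illustration): the angle a sensor subtends at the laser is about its width divided by its distance, so doubling the sensor width doubles the sweep time of the beam across it but halves the angular precision of the fix.

```python
import math

def subtended_angle_deg(sensor_width_m, distance_m):
    """Angle (in degrees) that a sensor of the given width subtends at the laser.

    A wider sensor subtends a larger angle, so the rotating beam sweeps
    across it for longer (easier to detect) but the resulting position
    fix is correspondingly coarser.
    """
    return math.degrees(2 * math.atan(sensor_width_m / (2 * distance_m)))

# A 2cm cylindrical diffuser seen from 3m away subtends well under half a degree:
angle = subtended_angle_deg(0.02, 3.0)
```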


2) have the process of acquiring the laser beam end up with the robot
accurately pointing at the laser (or perhaps accurately pointing 90 degrees
away from the beam because the sensor is on the side of the robot; this idea
from an email from John Barnes)

Combining this with the idea of an omnidirectional sensor, we get a
side-mounted half-cylinder sensor, sort of like the pyro sensors that
activate motion-sensor lights.


Yes - I wanted to avoid having to continually distract the robot from its
task in order to find out where it is.  All this rotating to point towards
rotating lasers sounds REALLY time-consuming.  I want my robots to get on
with doing a real task - and not spend minutes at a time figuring out
where they are - and then using odometry for about 30 seconds before they
have to do it all over again!

I think odometry is probably better than that.  Again, no experience yet.

This might be a good time to mention the commercial Cye robot, which
navigates entirely with odometry.  It costs about $1000.

http://www.personalrobots.com

Cye uses stepper motors to turn special spiky wheels that work well on
carpet.  It has two wheels, and is much wider than it is deep.  The only
sensors on board are a tachometer and a stall current sensor on each motor.
It uses the current sensors to detect when Cye has collided with something.
(The latest version also includes a sound sensor, for controlling Cye by
clapping your hands.)

It communicates to a base station via radio.  The base station is connected
to a PC, where all the intelligence for Cye resides.  Windows-only, phooey.

The PC contains a map of the rooms that Cye can travel through.  The
software lets you lay out the walls and furniture manually, or you can drag
Cye around the floor with your mouse and have it detect stuff by running
into things.

Cye keeps track of its position entirely with odometry.  It has a few clever
ways to reduce and/or correct odometry errors:

1) The base station (which also recharges Cye) is an absolute reference
point.  When it goes back there it knows exactly where it is.  I couldn't
find info on how this works - I assume a wide ramp that funnels Cye down
into a precise position.

2) The various areas of the floor can be assigned a bias value in the
software.  For example, heavy carpet slows Cye down by 3% or so, and
hardwood floors speed him up by a bit.  You can tune these parameters to
improve accuracy.

3) You can define "checkpoints" in the software, which appear to be places
where Cye can bump against a wall to get rid of error in a particular
dimension.

4) As Cye travels, the software keeps track of how much error has
accumulated.  (The picture of Cye on the screen gets a fuzzy border that
reaches further out with the increasing error.)  When it passes a certain
threshold, Cye goes to a checkpoint or the base station as soon as it can.

We can steal all of these ideas.  The base station (minus recharger and
radio), the floor bias, the checkpoints, and keeping track of accumulated
error.
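Those last three ideas fit together neatly in software.  Here's a sketch of how a Cye-style dead reckoner might combine floor bias, an accumulated-error budget, and checkpoint resets - all the names and numbers are made up for illustration, and it's one-dimensional for brevity:

```python
# Hypothetical values: per-floor-type speed bias and an error budget that,
# once exceeded, sends the robot to a checkpoint (or the base station).
FLOOR_BIAS = {"carpet": 0.97, "hardwood": 1.02}  # invented scale factors
ERROR_PER_METER = 0.05    # assume ~5cm of drift per metre travelled
ERROR_THRESHOLD = 0.20    # recalibrate once uncertainty reaches ~20cm

class DeadReckoner:
    def __init__(self):
        self.x = 0.0        # estimated position (metres)
        self.error = 0.0    # accumulated uncertainty (metres)

    def move(self, odometer_metres, floor="hardwood"):
        # Correct raw odometry by the floor's bias, then grow the error bound.
        corrected = odometer_metres * FLOOR_BIAS[floor]
        self.x += corrected
        self.error += abs(corrected) * ERROR_PER_METER

    def needs_checkpoint(self):
        return self.error >= ERROR_THRESHOLD

    def at_checkpoint(self, known_x):
        # Bumping a wall or docking resets both position and uncertainty.
        self.x = known_x
        self.error = 0.0
```

The "fuzzy border" on Cye's on-screen picture is just a visualization of `self.error` here.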


The ACCURACY of the SPEED of the laser becomes an issue.  Since motors vary,
the actual speed of rotation is unknown unless you measure it continually
and pass that information to the robot.  Since the precision of measurement
of angle is the thing you are trying to get rid of, this is a self-defeating
mechanism.

In a sense, you are correct.  But measuring the angle of a single particular
pointing operation, as in your tower idea, seems to me to be more prone to
error than measuring the speed of rotation of a turntable where all you
really need is the average speed.

I suppose so - but is the average speed enough?  Are lego gears smooth enough
and motors uniform enough to get you good angular rates?  Maybe.

Indeed, "maybe".  I have high hopes.  Running a bunch of tests with
datalogging should tell us one way or the other.
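The datalogging test itself is simple if you can log a once-per-revolution index pulse: average the rate over many revolutions so the gear-mesh jitter cancels out.  A sketch, with invented timestamps:

```python
# Estimate the turntable's average angular rate from datalogged timestamps
# of a once-per-revolution index pulse.  All numbers are invented.

def average_rate_deg_per_sec(pulse_times_sec):
    """Average angular speed over all logged revolutions.

    Per-revolution jitter from gears and motor load averages out; only
    the first and last timestamps and the revolution count matter.
    """
    revolutions = len(pulse_times_sec) - 1
    elapsed = pulse_times_sec[-1] - pulse_times_sec[0]
    return 360.0 * revolutions / elapsed

# Four index pulses, roughly one revolution per 2 seconds with motor jitter:
rate = average_rate_deg_per_sec([0.00, 2.03, 3.98, 6.01])
```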


Of course you could make several scans
from top to bottom and from bottom to top and average your results, like the
trick the Mars Pathfinder boffins used to increase the resolution of the
camera on the base station.

Yes.

But my main concern is the issue of the orientation of the robot.

If the tower were the active component though (with all the brains), it
could measure the position of two or more retro-reflectors on the robot
and deduce the orientation from that.  That might not be terribly accurate
though.  I favor doing *only* position measurement - using consecutive
readings to deduce the direction the robot is moving - and therefore
pointing.  If you drive in straight-ish lines for reasonable distances,
you can get this information to an arbitrarily good precision.  Use
odometry to cope with going around corners and to fill in when you don't
read IR from the tower or if it loses the robot and has to re-acquire it.

Sure, that would work, at the expense of having to move the robot to
determine its heading.

Yes - but I wasn't planning on just running this system every few minutes,
I would expect to give the robot its new position every second or two.
Hence, when the robot isn't moving, its heading isn't changing so we
really don't need an update from the last known heading.  Whenever the
robot happens to be moving in a straight line, it'll get a heading value
and use odometry when it isn't.  You could also do planned straight line
motions to get the heading updated if you really need to know - but I
guess this all starts to depend a lot on the robot's mission.

So you constantly send position updates to the robot.  That should work
great.
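The heading-from-motion calculation is just an atan2 over two consecutive fixes, with a guard against runs too short to trust.  A sketch (the 5cm threshold is an invented number):

```python
import math

def heading_from_fixes(p_prev, p_curr, min_run=0.05):
    """Heading in degrees (east = 0, counter-clockwise) from two consecutive
    position fixes from the tower.

    Returns None when the robot hasn't moved far enough between fixes for
    the bearing to be trustworthy - fall back on odometry in that case.
    min_run is in metres.
    """
    dx = p_curr[0] - p_prev[0]
    dy = p_curr[1] - p_prev[1]
    if math.hypot(dx, dy) < min_run:
        return None
    return math.degrees(math.atan2(dy, dx))

# Two fixes a second apart; the robot moved about 10cm to the north-east:
h = heading_from_fixes((1.0, 1.0), (1.07, 1.07))
```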


If you are working in confined spaces and doing a lot of rotation, then
my scheme for getting heading from motion won't work.  On the other hand
if this is a "Go get me a beer out of the fridge" robot, I would expect:

a) That there would be some nice long straightline motion.
b) In navigation, you are only really concerned with heading when
you need to get somewhere.  Since this scheme delivers absolute
positions continually, and headings during forward motion, that
may well be good enough.
c) Robots that are doing lots of little rotations in confined areas
will be the ones that are least able to do complex gyrations in
order to intercept rotating ground-level lasers.

I'm expecting that odometry will be able to deal with maintaining heading
information in complex operations.  The robot would check its heading before
and after doing a lot of turns.


Well, this could all be B.S - but I think a lot depends on what your
robot is *for*.

You'd have to worry about running into something
because you're not quite sure of your heading, though.  Not likely to be a
big problem, since odometry is unlikely to be off by 180 degrees between
calibrations.

And short runs will give you approximate headings - long runs, accurate ones.
So if you are hopelessly unaware of your heading, even a few inches of run
will get you a heading accurate to within perhaps 45 degrees.  If you have
any kind of approximate cognitive map of your area, you should be able to
use that to predict a good direction to drive a foot or two...which gets your
heading down to (say) 5 degrees - which is enough to get you a really long
accurate measuring route.
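That run-length-versus-accuracy relationship is easy to quantify: if each endpoint fix can be off by some amount, the worst-case heading error is roughly atan(2 x fix error / run length).  A sketch with invented error figures:

```python
import math

def heading_error_deg(position_error_m, run_length_m):
    """Worst-case heading error when each endpoint position fix can be off
    by position_error_m and the straight run covers run_length_m.

    Two fixes, each off by up to position_error_m in opposite directions,
    can tilt the apparent track by about atan(2*err / run).
    """
    return math.degrees(math.atan2(2 * position_error_m, run_length_m))

# Assuming 2cm position fixes: a 10cm run versus a 2m run.
short = heading_error_deg(0.02, 0.10)   # coarse heading, ~22 degrees
long = heading_error_deg(0.02, 2.00)    # much tighter, ~1 degree
```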

However, I maintain that for many tasks, simply knowing where you are in
absolute terms means that accurate heading information is only useful
for bringing tools to bear on tasks (like opening the fridge and grabbing
the CORRECT beer).  I doubt that all these spinning techniques are very
good for that because once you have gotten a really accurate heading, you
then have to trash it again by doing a big rotation to point in the
direction you actually wanted to go.

That's certainly a valid concern.  Some robots use spinning sensors to
overcome this problem, but that costs us a motor.  I'm hoping that odometry
will be good enough to keep track of bearing, esp. immediately after a
calibration check.


Well - like I said - it depends on the application I think.

If you are going to go for a tower-based system - and admit to non-Lego
laser pointers - then doing all the work in the tower is just monumentally
easier *and* faster than all these spinning robots, spinning lasers, wide
and narrow beams, etc, etc.

The scheme I'm imagining would consume 2 scanning motors, two angle
sensors, one photo-detector and a $5 laser pointer (ALL in the tower)
plus *NOTHING* in the robot other than a retro-reflector mounted on
its roof somewhere.  It'll be fast - you could probably update the
robot with its position once a second - and best of all, the robot
doesn't have to stop doing its work to spin in place in order to
know where it is.  The robot can use all three RCX inputs for doing
its real job of work - and I can have multiple robots if I want.

I missed how you can get by with only one laser.  Ian Warfield's original
idea used a ground-level laser to find out the robot's bearing from the
tower, so that the tower-top laser knows that when it scans down it will hit
the robot.  How will you ensure that when the tower-top laser does its sweep
that it will hit the robot?  Do you just scan the whole room all the time,
like the laser terrain scanner on the Dante II robot?  If so, it seems that
it would take more time than one second to complete one scan.

Yes - exactly.  I'd do an initial cartesian grid scan of the entire room
looking for the robot.  That'll take a LONG time...especially if the
retroreflector on the robot is small - so I have to scan a very fine grid.
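A back-of-envelope calculation shows just how long: with invented but plausible numbers (a 4m square room, a 2cm reflector forcing a 2cm grid pitch, and a millisecond of dwell per grid cell), a full scan takes tens of seconds - far too slow for one-second updates, which is why the adaptive search below matters.

```python
# Back-of-envelope for the full-room grid scan.  All numbers are invented.
ROOM = 4.0      # metres per side
CELL = 0.02     # reflector size, which sets the required grid pitch
DWELL = 0.001   # seconds the laser needs per grid cell (assumed)

cells = (ROOM / CELL) ** 2          # 200 x 200 = 40,000 cells
full_scan_seconds = cells * DWELL   # 40 seconds for one complete sweep
```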

If you always start with the robot in a known position (the base station
idea) you can fix that problem.  Or, you could have the scanner station
always start by scanning the base station, and then proceed to the rest of
the room.


However, once I know where the robot is, and I know it can only move (say)
10cm per second - then in my next scan (one second later) - I only scan the
region within a 10cm radius of the last known position - and I'm guaranteed
to find the robot again. (Unless my dog grabbed it and ran off with it at 3
meters per second - which is actually quite likely.)

You could configure "hot spots" where the robot is likely to be, e.g. in
front of the fridge, beside the couch, next to the dog's bed.  You'd have
the hot spots (waypoints) anyway, as places where the robot performs tasks.


If I don't find it in that first scan then I gradually widen my search area
by 10cm in each consecutive scan. If I don't find it after (say) five
scans, I go off and scan the entire room again.  Meanwhile, the robot is
patiently waiting for a position update - and if it hears the tower say
"I can't find you" - or if it doesn't hear from the tower at all, it can
stop to give the tower a better chance...or some other strategy - like
using approximate odometry to back up to the last place the tower said
it was at.  That would allow you to navigate around chairs and tables
where the laser can't see you.
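That expanding-search loop is compact enough to sketch directly.  `scan_disc` and `scan_room` here are hypothetical stand-ins for the real laser scanning routines:

```python
# Sketch of the adaptive search: scan a disc around the last known fix,
# widen it by the robot's maximum travel on each miss, and fall back to a
# full-room scan after a few failures.

MAX_SPEED = 0.10   # metres per second, as in the text
MAX_MISSES = 5

def track(last_pos, scan_disc, scan_room):
    """Return the robot's new position, or whatever the room scan returns
    (possibly None) if even that fails."""
    for miss in range(1, MAX_MISSES + 1):
        radius = MAX_SPEED * miss        # the robot can't have gone further
        pos = scan_disc(last_pos, radius)
        if pos is not None:
            return pos
    return scan_room()                   # lost it: rescan everything
```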

This kind of 'adaptive search' is not a new idea.  In days gone by
before the invention of the mouse, computers used 'light pens' and
'vector scan' CRTs (no raster graphics back then).  Light pens were
located on the screen by those kinds of strategies.

Finally, you could refine your search still further if the robot would
tell you what it was doing.  ("I'm turning left",  "I'm reversing at
5cm/sec")...in the end, the robot could send odometry information to
the laser - whose job becomes one of fine alignment only.  ("The robot
thinks it's 'here' - let's go see...nope it's 3cm to the right of 'here'")

Once you start thinking about bidirectional communications between robot
and tower, you can use software 'smarts' to replace a TON of mechanical
complexity...but then I've been programming since 1972 and playing with
Lego technics since July 17th 2000...so you see where my inclinations lie!

I am exactly the other way around... long-time Lego user, neophyte
programmer.

I like the idea of having the robot constantly talking to the tower,
although with more than one robot you'd need a communications protocol.  I
understand that a few people have invented these - ask around.
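Since the RCX IR link is a shared broadcast medium, the core of any such protocol is addressing and corruption detection.  A minimal framing sketch - the ids, opcodes, and layout here are entirely invented, not any existing RCX protocol:

```python
# Hypothetical message framing for several robots sharing one IR channel:
# [dest_id, opcode, payload_length, payload..., checksum]

def frame(dest_id, opcode, payload):
    """Build a packet addressed to one robot, with a simple mod-256 checksum."""
    body = bytes([dest_id, opcode, len(payload)]) + payload
    checksum = sum(body) % 256
    return body + bytes([checksum])

def parse(packet, my_id):
    """Return (opcode, payload) if the packet is intact and addressed to us,
    else None (corrupt, truncated, or meant for another robot)."""
    if len(packet) < 4 or sum(packet[:-1]) % 256 != packet[-1]:
        return None
    dest, opcode, length = packet[0], packet[1], packet[2]
    if dest != my_id or len(packet) != length + 4:
        return None
    return opcode, packet[3:3 + length]
```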


Although once the robot is acquired, you can stop scanning the other parts
of the room, I guess.

Yes - exactly.

I'd also like to extend the idea to multiple robots.  The tower could know
the approximate positions of several robots - and just check up on each one
in turn.  Those robots that are in most need of odometric-correction could
plead for higher priority in the queue of places to search.  We'd have to
be VERY careful when two robots were close to each other though - if the
tower ever got them mixed up, things would get QUITE confusing.  :-)

Now, I just need to figure out how to make a corner reflector instead
of using a bicycle reflector so that I don't get spurious laser light
flashing around the room - potentially damaging people's eyesight.

The spurious reflections may be due to the fact that bicycle reflectors
aren't mirror finished on the back, and the little corner reflectors
probably also act as prisms, bending the light rays around and shining them
back into other prisms unpredictably.

Yes.  Also, the reflector has quite a lot of plastic around the edges
that reflects - but not as we'd hope. Possibly, masking out the edges
would help too.

I think using a bunch of mirrors to build a corner reflector by hand
would be better.  I'd also like to *consider* giving the robot the
ability to change from a large reflector that the tower can find
quickly - but without much precision - and a smaller reflector that
it can deploy when the tower knows roughly where the robot is and
needs a small target in order to make a high-precision 'fix'.  In the
'beer fetching' example, the robot needs a good fix at the start of its
trip - but may only need very rough feedback while on the LONG trip across
the kitchen floor.  Once it thinks it's getting close and has to latch
onto the door frame to open the fridge, it'll want REALLY good data and
won't care so much about how long the tower takes to scan that small
area.

Naah - don't bother.  Once you reach the fridge, switch over to sensors on
the robot to find and open the door.  (Wedge a cam into the crack between
door and fridge?) Or, as I'm contemplating, have another robot IN THE FRIDGE
open the door and hand the appropriate beverage to the mobile robot.  Stick
the RCX to the outside of the fridge, on the side, so it can talk to the
mobile robot with IR.

Sample conversation between mobile robot, fridge robot, and tower robot:

<Steve presses message button 1 on his remote control>

MR: Tower, Steve wants his favorite beer.  I'm going to the fridge.
TR: Roger. <starts looking for MR along track to kitchen>
MR: <drives a while> Am I there yet?  I think I'm in the doorway now.
TR: Yup, you're there.  You're on your own now.  Good luck.
MR: <drives up to fridge> Hi
FR: Hello, what'll you have?
MR: Steve wants a Boddington's.
FR: Fresh out. Have a Pepsi instead.
MR: No, Steve wants a beer.
FR: How about a Tequiza?
MR: I'll take one.
FR: <opens fridge, extends arm containing Tequiza> OK, here.
MR: <grabs Tequiza> Got it, thanks.
FR: Tell Steve I'm out of Boddington's.
MR: Righto. <drives into living room> Hello, I'm back.
TR: <checks last known position> Welcome back.  You're in the kitchen
doorway.
MR: FR says there's no more Boddington's.
TR: OK, I'll tell Steve.  <Points laser at sign on wall and beeps for
attention, sign says "Put Boddington's on your HomeGrocer shopping list">

<Steve waves his laser pointer across TR's laser detector three times, and
TR goes back to work.>


You might be able to fix this by cracking open a bicycle reflector and
spray-painting the back.  Ideally, you'd want to mirror-finish the back, but
I haven't got a clue how you would do that.  Can you buy paint that you can
put on glass to turn it into a mirror?  I don't think so, but it would be
handy if such a product existed.

Yes - but getting three 1" planar mirrors at right angles seems easier.

If the tower can track a robot reasonably well, the laser will always
go down from the ceiling somewhere and shine within an inch or two of
the robot - then reflected back into the ceiling fitting.  It would
be hard to get that shone in your face unexpectedly.

Ceiling fitting?  Where did the ceiling enter into your scheme?

Well, mostly, to minimise errors:

* The angle down to the robot needs to be steep because of the
nature of the 'tan()' function.
* I know that corner reflectors only work over 45 degrees and you have
to glue many of them together to increase the spread.  If the laser
is in the center of the room on the ceiling, it can see pretty much
all of the room at 45 degrees or better.
* There are fewer obstacles when looking DOWN on the robot than
across a crowded room.

Except for things like tables, chairs, couches, etc.  Depends on your room.
You'd probably program "no-go" zones where the robot can't go, or rearrange
things to suit your little Lego friend.

* The 'grid' that the laser paints to track the robot will have
better linearity if the laser is in the center of the grid.
* If the robot is going up and down small obstacles, the error in
positioning due to that would be much reduced with a 'look down'
sensor that is seeing a plan view of the action.

Going up and down would cause problems with horizontal scanning schemes, as
well as probably screwing up odometry.  I'd try to keep the Lego pieces
picked up if I were you.  :)

* The floor is more uniform in colour than the walls. Better for
light sensor calibration.
* There is no chance of firing the laser at a window or some other
reflective surface that would result in a retro-reflection from
a robot inside the room that would *APPEAR* to be sitting outside
the window as far as the scanner was concerned!
* I can do laser light shows onto the floor in the middle of the room!

Cool!  You can also use that laser to point to various signs on the wall,
for communication and/or humorous purposes.

* I'd want the laser to be pointing DOWN - rather than HORIZONTALLY
so it won't shine in my eyes by mistake.

Don't look up!
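The tan() point above is worth quantifying: with the scanner at height h, a beam tilted at angle t from vertical lands at x = h*tan(t), and a small angle error moves the spot by roughly h/cos^2(t) per radian - so shallow, far-from-vertical beams are much more error-prone than steep ones.  A sketch, with an assumed ceiling height:

```python
import math

CEILING_HEIGHT = 2.4  # metres; an assumed room height

def spot_distance(tilt_deg):
    """Horizontal distance of the laser spot from the point directly
    below the scanner, for a beam tilted tilt_deg from vertical."""
    return CEILING_HEIGHT * math.tan(math.radians(tilt_deg))

def error_per_degree(tilt_deg):
    """Position error (metres) caused by a one-degree angle error.

    d(h*tan t)/dt = h / cos^2(t): steep beams (small tilt) are far less
    sensitive to angular error than shallow ones.
    """
    t = math.radians(tilt_deg)
    return CEILING_HEIGHT / math.cos(t) ** 2 * math.radians(1.0)
```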


You could get the effect of a really tall tower by fastening your laser
scanner to the ceiling.

Yes.

It would also be cool to mount them in several rooms so that they can
hand-off control of the robots as they go from one room to the next.

This is starting to sound expensive.


If you don't want to fasten Lego stuff to the ceiling (look out below!), you
might try mounting it on the wall with picture-hanging technology.  More
likely to shine into people's eyeballs this way, though.

Exactly - I think that's a major concern. Hopefully, everyone is admiring the
clever robots down on the floor!

And the light show, too.


Some suggestions for safety signs follow.  :)

Sign on door of dining room, next to goggles on wall hook:

"WARNING: All diners must wear eye protection!"

Sign on the base station (wherever it is): "WARNING: Do not look directly
into laser with remaining eye"

<giggle>

My two rotation sensors are on order :-)

Good luck.  We won't know how well these ideas will work until we start
building them.

Indeed.  Right now, I have a LOT of OpenSource software that I'm
committed to doing - so I don't have as much time as I'd like to
just *play*.

Thanks for the time you've already spent playing around with ideas with the
rest of us on the list.  I appreciate it!

--
Doug Weathers, http://www.rdrop.com/~dougw
Portland, Oregon, USA
Don't spam me - I know how to use http://www.spamcop.net
"On a clear disk you can seek forever"
