
de Camp, N. (2013). New methods for extracellular brain recordings in stationary and freely walking honeybees during decision making and virtual navigation. PHILICA.COM Article number 373.


New methods for extracellular brain recordings in stationary and freely walking honeybees during decision making and virtual navigation

Nora Vanessa de Camp (behavioural physiology, Humboldt-University, Berlin)

Published in neuro.philica.com

Are studies of complex behavioural actions and electrophysiology in small insects, like honeybees, mutually exclusive? In a first approach to studying behavioural decision making in insects, a method has been developed to record extracellularly in a virtual environment that simulates a simplified 2D world for honeybees walking stationary on an air-supported spherical treadmill with which they control the environment. The rotatory and translatory (forward) movements of the treadmill are translated into corresponding changes of the visual patterns. Honeybees respond to these patterns and show different walking trajectories in the virtual environment. During long-lasting extracellular recordings the neural activity precedes turning manoeuvres and lasts longer than the motor activity, indicating choice-dependent neural activity. In a second approach, tetrodes were chronically implanted in freely walking honeybees. This method is suitable for studying extracellular brain electrophysiology in a Y-maze decision situation.

Article body


The search for neural correlates of behavioural decision making during navigation requires the combination of two procedures that are usually very difficult to combine: stable recording from neurons and free movement of the animal in a rather natural environment. Two approaches have been followed in the past: monitoring neural activity of animals (usually rats) navigating in a small space, and animals (or humans) navigating in a virtual reality environment (VE). The latter has the advantage that the simulated environment can be large and fully manipulated, but the disadvantage of compromised sensory feedback from the moving visual world and the stationary conditions of the animal. Nevertheless, animals and humans can navigate in a virtual reality set-up that produces the relevant visual feedback to the intended movements in 2D (design guidelines for human navigation in VEs (Vinson, 1999) can be found at: http://cogprints.org/3297/). Multiple devices of this kind have been developed for human subjects to explore behavioural navigation, biofeedback and psychotherapy, and were successfully combined with neural recordings of both EEG and local field potentials (de Araujo et al., 2002, Lee et al., 2007, Kober et al., 2012). Rats and humans are able to navigate in visual virtual environments (Gillner and Mallot, 1998, Hölscher et al., 2005). Neural recordings from the hippocampus are possible during functional neural imaging (Dombeck et al., 2010) and even from single neurons intracellularly (Harvey et al., 2009). Drosophila flying in a simple virtual environment has helped to elucidate a large range of visual performances and visual learning at multiple levels of analysis for more than 30 years (Wolf and Heisenberg, 1991, Peng et al., 2007).
However, combining flight behaviour in a virtual environment with neural recordings has turned out to be rather difficult in Drosophila, yielding only some correlations between turning behaviour and local field potentials (Van Swinderen and Greenspan, 2003). Intra- and extracellular recordings during olfactory learning in restrained honeybees have helped to understand multiple facets of sensory encoding and neural correlates of memory formation (Menzel and Giurfa, 2006, Denker et al., 2010, Strube-Bloss et al., 2011), but instrumental forms of learning related to visual navigation have so far not been possible in honeybees. Here, an air-supported spherical treadmill was used that allows the stationary walking honeybee to control the visual environment while long-lasting extracellular multi-unit recordings were performed from mushroom body extrinsic neurons. The aim was to find test conditions under which the animal chooses between alternative visual structures, in order to search for neural correlates of this choice behaviour. The recorded neurons are known to change their properties during olfactory learning, sometimes after a consolidation phase of several hours (Okada et al., 2007, Strube-Bloss et al., 2011). These neurons receive input from the intrinsic neurons of the mushroom body. They are sensitive to combinations of multiple sensory modalities including visual stimuli (e.g. the PE1 neuron; Mauelshagen, 1993, Rybak and Menzel, 1998). Since the visual system of honeybees (and other insects) differs in many aspects from that of humans, it is a rather difficult task to create an immersive VE for a honeybee. For example, bees are able to detect UV light and polarized light (von Frisch, 1914, Lindauer, 1959, Menzel, 1975). Therefore it is important to compare the behavioural and electrophysiological studies of decision making in a virtual environment with comparable measurements in a real setting, for example a Y-maze.

2. Material and Methods

2.1. Virtual Environment Setup

Spherical treadmill, geometry of the virtual environment and overall setup are described in Hölscher et al. (2005), apart from the following changes: The treadmill consisted of a Styrofoam ball (10 cm diameter) placed in a half-spherical plastic cup with several symmetrically located holes through which a laminar air flow passed and let the ball float on air. Laminarity of the air flow was supported by a long (12 m) tube from the well-regulated fan to the ball. Low static electrification of the ball and humidity for the animal were achieved by blowing the air through a box containing water; floats prevented rippling of the water surface inside the box. The projector (Epson EMP-TW 700; digital scanning frequency: pixel clock 13.5-81 MHz, horizontal sweep 15-60 kHz, vertical sweep 50-85 Hz; Fourier analysis: main frequency range 210 Hz, less than 10% at 100 Hz and 60 Hz, respectively) was positioned above the Faraday cage and illuminated the inner surface of a cone-shaped screen (height 60 cm, bottom diameter 7 cm, top diameter 75 cm) via a large surface mirror and a Perspex window (fig.1). The inner surface of the cone consisted of thick white paper. Patterns projected onto this screen were distorted such that they appeared undistorted when seen by the bee (BeeWorld, programmed by Sören Hantke). During an experiment, the Faraday cage was closed. A webcam (Logitech, Morges, Switzerland) positioned above imaged the head of the animal via a 500 mirror objective, allowing observation of the animal during the experiment.


Figure 1 — The virtual environment setup. Blue arrows indicate corresponding parts in the scheme and the photo. The virtual scene is projected via the projector onto a mirror. The light passes through a Perspex window on top of the Faraday cage onto a cone-shaped screen inside the cage. The screen can be lifted in order to place the animal on top of the treadmill (air-supported Styrofoam sphere).


2.2. Control of the virtual environment and experimental procedure

The virtual environment and the recognition of the bee's movement were under the control of the custom-written program BeeWorld (Sören Hantke). It was implemented in Java using OpenGL bindings for Java (LWJGL). The movement of the ball, initiated by the walking bee, was detected by two optical computer mice of the kind used for computer games (Imperator, Razer Europe GmbH; G500, Logitech Europe S.A.). The mice were positioned 90° apart at the equator of the Styrofoam ball and precisely aligned with x/y micro drives. The bee was able to control the virtual scenery by rotatory movements of the ball. Multiple scenarios were programmed and realized as XML files containing the positions, widths and colours (RGB) of a variable number of vertically oriented stripes. These stripes were positioned around the bee. Scenario7, used in the experiments described below, provided only rotatory feedback for the animal, if not mentioned otherwise. Other scenes (constructed with Blender software) enabled the bee to control a 2D environment with multiple objects. Scenario7 consisted of alternating black and white, vertically oriented stripes with an angular width of 20° and a grey interspace of 40°. In the figures presented here, the redundant pattern is standardized to one black and one white stripe in an angle of 120°. Scenario12 consisted of one black, one white and four blue stripes with the same dimensions as in scenario7. A virtually navigating honeybee drone in scenario7 can be seen in the following video: https://www.researchgate.net/publication/235974934_drone_virtual_environment?ev=prf_pub . The field of view of a camera in OpenGL is limited to 179°; the scenarios projected onto the screen, however, simulated a 360° world. In the BeeWorld program a four-texture renderer (texturerenderer) was therefore used to create a 360° camera. The rotation speed of the skyline and the checkerboard was set relative to the rotation speed of the stripes in the XML scenario files.
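The two-mouse readout described above can be sketched as follows. This is a minimal illustration, not the BeeWorld implementation: it assumes mouse 1 sits directly behind the animal and mouse 2 at 90° to the side, so that horizontal (dx) motion at both sensors reflects yaw rotation of the ball and vertical (dy) motion reflects forward or sideways walking. All names and the counts-per-cm calibration are hypothetical.

```python
import math

BALL_RADIUS_CM = 5.0  # 10 cm diameter Styrofoam ball

def decode_ball_motion(m1_dx, m1_dy, m2_dx, m2_dy, counts_per_cm):
    """Convert raw mouse counts into yaw rotation (deg), forward
    translation (cm) and sideways slip (cm).

    m1 is assumed to sit directly behind the animal, m2 at 90 deg
    to the side (an illustrative geometry, see lead-in).
    """
    # average the two horizontal readings to reduce sensor noise
    yaw_cm = 0.5 * (m1_dx + m2_dx) / counts_per_cm
    yaw_deg = math.degrees(yaw_cm / BALL_RADIUS_CM)
    forward_cm = m1_dy / counts_per_cm   # walking along the body axis
    side_cm = m2_dy / counts_per_cm      # sideways slip of the ball
    return yaw_deg, forward_cm, side_cm
```

With a 10 cm ball, 1 cm of horizontal surface motion at the equator corresponds to about 11.5° of yaw, which is why the ball radius enters the conversion.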
Data of walking traces were synchronized with the spike recordings, which were collected with an analogue-to-digital converter (micro3, CED, Cambridge Electronic Design; 30 kHz sampling frequency per channel). A silicon NPN phototransistor (BPY 62) directed at the screen detected a short light signal under the control of the BeeWorld program and fed it into the ADC input of the converter in order to synchronize the spike data with the walking traces. The background skyline consisted of vertically oriented stripes of different hues of grey and different lengths. The angular rotation of the checkerboard pattern was equal to the angular movement of the ball, simulating a respective movement of the floor directly below and around the bee. The angular rotation of the stripe pattern was set to 75% of the checkerboard pattern, and a skyline pattern projected onto the screen together with the stripe pattern moved with 50% of the angular rotation of the checkerboard pattern. Thus these three patterns simulated depth information by creating a motion parallax signal for the virtually navigating insect. Direct light from the projector towards the animal was prevented by a light shade (fig.2a,c).
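The three-layer parallax coupling above can be sketched in a few lines. The gain values (100% checkerboard, 75% stripes, 50% skyline) are taken from the text; the function and dictionary names are illustrative assumptions, not BeeWorld code.

```python
# Parallax gains from the text: nearer layers follow the measured
# ball rotation more closely, simulating depth by motion parallax.
PARALLAX_GAIN = {
    "checkerboard": 1.00,  # ground directly below the bee
    "stripes": 0.75,       # mid-distance objects
    "skyline": 0.50,       # far background
}

def update_layers(layer_angles, ball_rotation_deg):
    """Advance each pattern layer by its parallax gain times the
    measured ball rotation; angles wrap at 360 deg."""
    return {
        name: (angle + PARALLAX_GAIN[name] * ball_rotation_deg) % 360.0
        for name, angle in layer_angles.items()
    }
```

Because the checkerboard is coupled 1:1, a 10° ball rotation shifts the ground by 10°, the stripe objects by 7.5° and the skyline by 5°, so the layers drift apart exactly as distant objects would during a real turn.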


Figure 2 — Inside the virtual environment setup. A recording situation in virtual scenario7 is shown in panel A. Due to a light shade, the bee (white arrow) is not situated in the beam path. A stationary walking bee with implanted electrodes is shown in panel B. The setup underneath the cone is shown in panel C. It consists of the treadmill, the optical sensors, the reward appliance and the animal holder with integrated LEDs, light shade and connections to the preamplifiers.


2.3. Electrophysiology in stationary walking bees

Four insulated copper wires (Elektrisola) with a diameter of 0.015 mm each were coiled and fixed with superglue or heated (210 °C for 3 minutes proved appropriate to fuse the polyurethane insulation of the wires without producing a short circuit). These techniques increased the stability of the coiled electrode bundles and facilitated tissue penetration. The best signal-to-noise ratio was achieved with Teflon-insulated silver ground wires with a diameter of 0.125 mm (WPI), which were implanted into the abdomen, or Teflon-insulated platinum/iridium wires of 0.025 mm diameter (Advent, Eynsham, Oxford, UK), which were thin enough to be implanted in the brain or ocelli, near the recording wires. The insulation at the tip of these wires was removed mechanically with forceps. The tip of the tetrode was cut with fine scissors. The single ends of the copper wires were dipped into hot solder to remove the insulation and then connected with silver conductive paint (Electrolube) to the female side of the pins of an IC socket. After drying, the electrodes were electroplated with gold and PEG using the low-impedance plating procedure (Ferguson et al., 2009) and the electroplating device NanoZ (Neuralynx). A scanning electron microscope picture of such a partially plated tetrode was kindly provided by H. Hilger (fig.3).


Figure 3 — A scanning electron microscope picture of a partially plated tetrode. The polyurethane insulation was melted by heat to fix the coiled structure of the electrode bundle. Both electrodes on the right side show a wobble-like surface texture due to the gold-PEG plating procedure (Ferguson et al., 2009). The scale bar indicates 10 µm. (Picture kindly provided by H. Hilger)


2.4. Preparation of the animals

Animals were caught at the hive entrance or, during winter time, in an indoor cage with freely flying bees (bumblebees, hornets). The animals were chilled on ice and fixed temporarily in a small tube with modelling clay. A small piece of plastic tube or rubber foam was fixed with dental wax on the thorax as a holder for the stationary running animal on the treadmill. A window was cut into the head capsule between the compound eyes, antennae and ocelli. The tip of the electrode was fixed to a fine forceps which was mounted on an external micromanipulator. The electrode was inserted into the brain while the animal was still harnessed in the tube. After placing the electrodes in the selected brain area (ventral aspect of the mushroom body) under visual control, the electrode was fixed with two-component silicone elastomer (Kwik-Sil, WPI) onto the brain and the head capsule. After the Kwik-Sil hardened, the electrode was released from the external micromanipulator and additionally fixed inside a small slit in the plastic tube/rubber foam holder on the thorax by forming a small loop from the head backwards to the thorax. About 5 minutes later, the bee was released from the tube by grabbing the plastic tube/rubber foam mounted onto the thorax with one forceps and slowly pushing the head backwards with another forceps to facilitate the animal's release from the tube without injuring the neck connective. The rubber foam holder had a slit in which the electrodes could be fixed without silicone elastomer. Fixation of the electrodes, however, was crucial for stable and long-lasting recordings. Afterwards, the animal was clipped by the plastic tube or piece of rubber foam on its thorax to an alligator clip attached to the electrode holder (fig.2c). Longer electrodes were additionally fixed with modelling clay to the electrode holder in order to keep the fine wires out of reach of the insect's legs.
Especially during the transfer to the setup after the release from the fixation tube, and during the first minutes on the ball, the animals often tried to remove the electrodes. The electrode holder with the animal was rotated 90° and the insect was carefully adjusted onto the floating ball. The electrode holder consisted of a small balance which kept the animal on the treadmill with its own weight (kindly provided by R. Menzel). This balance also allowed the animal to change its distance to the surface of the treadmill during walking. The direct light from the LCD projector was shaded in order to prevent direct illumination of the dorsal regions of the compound eyes and the ocelli. Two UV diodes were positioned within the shade just above the head of the animal, simulating short-wavelength light coming from above (fig.2c). The micromanipulator and binocular microscope necessary for positioning the tetrode during the preparation procedure were removed from the setup afterwards. Another, permanent manipulator allowed precise positioning of the animal on the spherical treadmill.

2.5. Extracellular recordings in freely walking honeybees in a Y-maze

The electrodes were constructed as described in the methods for the recordings in the virtual environment (2.3). The coiled wires and the ground wire were fixed with silicone elastomer (Kwik-Sil, WPI) to a piece of plastic tube (approximately 1 cm length). The plastic tube had previously been fixed with dental wax onto the animal's thorax. The plastic tube ensured that the recording wires were out of reach of the bee's legs. The Perspex Y-maze had a length of approximately 15 cm; the decision point was 10 cm away from the starting point of the bee. In the inter-trial intervals the bee was placed on a spherical treadmill.

2.7. Identification of the recording site in the brain

The newly developed extracellular fluorescent copper ion detector Flu TPA1 (Taki et al., 2010) was used to identify the recording site in the brain (fig.17c). The dye was dissolved to saturation in 500 µl PBS buffer (137 mM NaCl, 2.7 mM KCl, 8 mM Na2HPO4, 1.4 mM KH2PO4, pH 7.2) for each brain. The freshly removed brain was exposed to the dye for 20 to 40 minutes and directly afterwards fixed in paraformaldehyde (PFA, Electron Microscopy Sciences) for 5-8 hours. Afterwards the brain was washed for 10 minutes each in PBS, 50% ethanol (EtOH), 70% EtOH, 90% EtOH, 99% EtOH and three times in 100% EtOH to remove the water from the brain tissue. To clear the tissue for confocal microscopy it was transferred into a 2:1 mixture of benzyl benzoate and benzyl alcohol. Confocal scans were collected with a Leica TCS. The excitation wavelength of Flu TPA1 is 470 nm, the emission wavelength 510 nm.

2.8. Pretraining in freely flying bees

A group of free-flying honeybees, Apis mellifera L., was trained to collect 30% sucrose solution from a feeder located 10 m from the hive (thanks to Jaime Martinez-Harms). Individually marked foragers were selected from the feeders and from then on trained to enter an experimental tunnel to collect 50% sucrose solution. Only one bee was trained at a time. The experimental setup consisted of a tunnel made of UV-transparent Plexiglas. The set-up allowed control of the colour of the sidewalls and ensured daylight conditions inside the tunnel. During a pretraining period bees had to learn to walk to the end of the tunnel to collect a sugar reward. During this phase white was used as the colour for the floor and sidewalls. Once bees had learned to collect a sugar reward at the end of the tunnel, they were differentially trained to associate a defined arrangement of colours on the sidewalls with the encounter of either positive or negative reinforcement. Using coloured cardboard, each sidewall was given a different colour. If subjects experienced colour A at the right wall and colour B at the left wall as they walked from the entrance towards the end of the tunnel, they were positively reinforced. If subjects encountered colour B at the right and colour A at the left sidewall, they were negatively reinforced with 1 M KCl. After being reinforced, bees were gently removed from the tunnel to prevent them from experiencing the opposite colour configuration while walking back towards the entrance of the tunnel. The positively and negatively reinforced configurations were presented 10 times each in a pseudo-random manner. After 20 training trials bees were transferred to the virtual environment and their response to the training conditions was tested.

2.9. Statistics and Analysis

Matlab (2010b) was used for statistical analysis, the calculation of the Fano factor (Nawrot et al., 2008) and the correlation coefficient (Spearman's rho). The mean and variance underlying the Fano factor were calculated via a sliding window. For this, the spike times were binned into one-second windows. The variance and mean spike counts were calculated over a range of 5 seconds, sliding in steps of one second, and the Fano factors were calculated for these 5-second windows. The distribution of the data was estimated via the Kolmogorov-Smirnov test, the Lilliefors test and the Jarque-Bera test. The data presented in this paper are not normally distributed. Therefore non-parametric tests were used: the Wilcoxon rank-sum test to compare two variables and the Kruskal-Wallis test to compare more than two groups. The result of the Kruskal-Wallis test, a stats matrix, was further used to calculate pairwise comparisons with multiple comparison tests for different confidence levels. For the spike data (frequency) the mean plus standard deviation was calculated. Due to the high sample size in these cases, this was taken as a valid approximation that 68% of the data lie within mean plus one standard deviation, 95% within mean plus two standard deviations and 99% within mean plus three standard deviations. Spike2 (Cambridge Electronic Design) software was used for data sampling and analysis.
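The sliding-window Fano factor described above can be sketched as follows. This is a Python reimplementation of the procedure as stated in the text (1 s bins, 5-bin windows sliding in 1 s steps, Fano factor = variance/mean of the counts); the original analysis was done in Matlab, and the function name is an assumption.

```python
import numpy as np

def sliding_fano(spike_times_s, duration_s, window_bins=5):
    """Fano factors for 5 s windows of 1 s spike-count bins,
    slid in 1 s steps across the recording."""
    # bin spike times into 1 s windows
    counts, _ = np.histogram(
        spike_times_s, bins=np.arange(0.0, duration_s + 1.0, 1.0))
    fanos = []
    for start in range(len(counts) - window_bins + 1):
        w = counts[start:start + window_bins]
        m = w.mean()
        # Fano factor = variance / mean; undefined for silent windows
        fanos.append(w.var(ddof=0) / m if m > 0 else np.nan)
    return np.array(fanos)
```

A perfectly regular spike train (one spike per second) has zero count variance and therefore a Fano factor of 0 in every window, whereas a Poisson process would hover around 1; this is what makes the measure useful for characterising spike-train variability.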


3.1. Extracellular recordings in the virtual environment

The aim of this study was the analysis of neural recordings and simultaneously recorded walking traces of bees exposed to a 360° panoramic pattern of alternating black and white stripes as described in the Methods section (2.2). Three levels of distance were simulated by different angular coupling to the rotatory movement of the walking bee: direct coupling for the horizontal checkerboard pattern in the near vicinity of the animal (ground structure), 75% angular coupling for regularly spaced vertical black or white stripes (20°) on a light grey background (objects), and 50% for the background pattern (skyline), as described in the Methods. This VE did not allow the bee's movements to control the translatory component, if not mentioned otherwise. Bees exposed to such an environment often showed initially high running activities with random rotatory control of the visual patterns (fig.4).


Figure 4 — Brain activity during virtual navigation in a forager honeybee. Simultaneous recording of the rotation activity (blue trace) and neural activity (black and pink dots, plotted on the right Y-axis, spikes per second). Straight vertically oriented blue lines are plotting artifacts where the bee crosses the border from 0° to 360°. As seen here, the bees had a tendency for fast rotatory activity during the first period of time in the virtual environment. The activity of unit1 (black dots) is slightly elevated during periods of high rotation activity.


After several hours the animals started to respond to the objects (fig.7b) by temporarily fixating one or the other edge of the objects, and then switching to another edge or object. This edge/contrast selection (Fig.15,16) is characterised by fast turning movements and rest phases before and after selecting the contrast border. Especially in virtual environments with blue stripes the bees oriented themselves toward stripe edges, as shown for pooled durations of stay along four consecutive days in Figure 6. The pooled durations of stay for 4 consecutive days are similar in one drone and one forager bee. Especially in scenario12, with blue stripes (fig.6c,d), the bees showed longer durations of stay in the vicinity of the white or blue stripes in comparison to the black stripes. The recorded neural activity in the drone starts more than ten seconds before the turning behaviour and lasts for several seconds after the turning manoeuvre is finished (fig.7a). The spike activity was rather constant and the recording had a good signal-to-noise ratio during the 4 days of this experiment (Fig. 7c). Good signal-to-noise ratios could also be achieved for virtually navigating hornets (fig.8), bumblebees (fig.9), wasps (fig.10) and forager honeybees (fig.11) (example original recording traces are shown). In Figure 7a the standardized spike rate during 15 consecutive edge selection events is illustrated. An edge selection is characterized by a turning (rotation) movement towards a contrast (here the border between a white and a black stripe). The resulting movement pattern, with fast rotation towards the contrast area and resting phases until the next rotation movement, looks like one or more steps (Fig.5a).
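The duration-of-stay histograms referred to above fold the 360° panorama onto its 120° redundant pattern and accumulate time in 5° bins. A minimal sketch of that folding, assuming headings are sampled at a fixed interval (the function name and sampling interval are illustrative):

```python
import numpy as np

def duration_of_stay(headings_deg, dt_s=1.0, period_deg=120.0, bin_deg=5.0):
    """Fold each sampled heading into [0, period_deg) and accumulate
    the time spent in each 5 deg bin of the redundant pattern."""
    folded = np.asarray(headings_deg) % period_deg
    edges = np.arange(0.0, period_deg + bin_deg, bin_deg)
    counts, _ = np.histogram(folded, bins=edges)
    return counts * dt_s  # seconds of stay per bin
```

Because the stripe pattern repeats every 120°, headings of 0°, 120° and 240° all land in the same bin, which is exactly the pooling the figures use (3x5°=15° per plotted bar corresponds to three such folded 5° bins).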


Figure 5 — Extracellular recordings in honeybees during virtual navigation. (a) A typical edge/contrast selection walking pattern. At the left border of the figure the virtual cues are indicated as white and black (here light grey) stripes. The rotation (black trace) is plotted on the left Y-axis. The spike frequency of unit1 (grey dots) is plotted on the right Y-axis. The step-like pattern is due to fast rotations and pauses in between. (b) A top view into the virtual environment during an experiment. The black arrow points towards the running bee (drone) and the white arrow towards the light shade. (c) Signal-to-noise ratio of the recording. The bottom trace is the original recording; on top, an example of the sorting result. The vertical scale bar indicates 500 µV, the horizontal bar a time range of 0.01 s.


Figure 6 — Duration of stay for a forager honeybee (left column) and a drone (right column) in scenario7 (a,b, top row) and scenario12 (c,d, bottom row). The grey bar in the background of the top-row figures indicates a black stripe in the virtual scenario7, whose pattern consisted of alternating black and white stripes. One redundant part of this pattern is shown here and the data are standardized to 120° (therefore each bar represents 3x5°=15°). Both bees had a clear peak duration at the release site (right border of the figures) and a tendency to prefer the white stripe. The general angular distribution of stay is much more homogeneous for scenario7 than for scenario12. The light grey bars in the background of the bottom-row figures were blue in the virtual scenario12 and the dark grey bar was black. The place preferences are pooled data from four consecutive days for each bee. The forager bee (110628, c) preferred one locality at the border of a blue stripe (right) and one place in the near vicinity of the white stripe in scenario12. The drone (110705, d) preferred the same places. Additionally, the drone had longer durations of stay at the starting point in the virtual environment (the right and left borders of each plot). The solid black line indicates the mean duration of stay; the dashed lines indicate the mean duration of stay plus one, two or three times the standard deviation (SD).



Figure 7 — Neural correlates of virtual navigation. (a) Spike activity of unit1 (u1) during 15 consecutive edge selection events. Each count of spike activity during the turning movement is standardized to its duration (light grey bar). The spike rate is pooled in one-second bins (dark grey bars before (pre) and middle grey bars after (post) the light grey bar in the middle) for 10 seconds before (pre) and 15 seconds after (post) the spontaneous turning took place. The solid line indicates the mean spike rate during periods without edge selection (baseline activity - b.a.). The dashed lines indicate the mean spike activity plus one, two and three times the standard deviation (SD). The spike activity is increased 10 s before the turning behaviour took place and reaches a peak shortly after the turning movement. (b) Neural activity of unit1 and rotatory activity (rot) in the same drone. The start of the virtual scene is indicated by the grey and white bars (grey bars were black during the experiment). Note the correlation of active navigation, target selection and spike activity (dots, right Y-axis). (c) Statistically significant Spearman rank correlation between rotation and spike activity for unit1 at day four (rho=0.96, p=1e-13). The lines indicate the mean and the mean plus two times SD for the spike activity and the rotation. The right and left borders of the figure are the starting points of the virtual scene. The data are standardized to 120° of the 360° panoramic virtual environment. The dark grey bars each represent the translation over 3x5°=15°.


Figure 8 — Recording quality during virtual navigation. Example trace for a hornet.


Figure 9 — Recording quality during virtual navigation. Example trace for a bumblebee.


Figure 10 — Recording quality during virtual navigation. Example trace for a wasp.



Figure 11 — Recording quality during virtual navigation. Example trace for a forager honeybee.


The light grey bar in Figure 7a (rotation) represents the mean rate over 15 edge selections in the same animal during the turning period; the spike rate per turning manoeuvre was divided by the duration of this behaviour. The dark grey bars on the left (fig. 7a) represent one-second bins of the spike rate before the turning manoeuvre took place. Every bin contains the mean over 15 consecutive edge selection events. The same applies for the phase after the turning behaviour, which is represented by the medium grey bars. The horizontal line (fig. 7a) depicts the mean frequency of unit1 during non-edge-selection phases. The other dashed horizontal lines represent the mean plus one, two or three times the standard deviation (SD). The spike rate increased more than 5 s before the turning behaviour started. The peak frequency of about 5 Hz was reached after the turning behaviour ceased, when the animal was resting again. Afterwards the spike frequency decayed slowly (fig. 7a). In Figure 7c, a statistically significant correlation (Spearman rank correlation, p=1e-13, rho=0.96) between the spike frequency and the amount of rotation of this drone is shown. In contrast to the rank correlation between the spike rate and the rotation, which is more or less constant between day 2 and day 4 (Fig. 12, p=2.6e-6; rho=0.91), the rank correlation between the spike rate and the forward running activity markedly increased from the second day (Fig. 13, p=0.0013; rho=0.63) to the fourth day (Fig. 14, p=8.7e-13; rho=0.95) in this drone.
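The peri-event pooling described above (1 s bins 10 s before and 15 s after each turn, with the spike count during the turn normalized by the turn's duration) can be sketched for a single event as follows; event times and names are illustrative assumptions:

```python
import numpy as np

def peri_event_rates(spike_times, turn_start, turn_end,
                     pre_s=10, post_s=15):
    """Spike counts in 1 s bins before and after a turning manoeuvre,
    plus the duration-normalized spike rate (Hz) during the turn."""
    spikes = np.asarray(spike_times)
    # 1 s count bins covering the pre_s seconds before the turn
    pre = np.array([np.sum((spikes >= turn_start - pre_s + i)
                           & (spikes < turn_start - pre_s + i + 1))
                    for i in range(pre_s)])
    # spike count during the turn, divided by the turn's duration
    turn_rate = np.sum((spikes >= turn_start) & (spikes < turn_end)) \
        / (turn_end - turn_start)
    # 1 s count bins covering the post_s seconds after the turn
    post = np.array([np.sum((spikes >= turn_end + i)
                            & (spikes < turn_end + i + 1))
                     for i in range(post_s)])
    return pre, turn_rate, post
```

Averaging these per-event vectors over the 15 edge selections, as in Figure 7a, yields the pooled pre/turn/post bars that are compared against the baseline mean plus one, two or three SD.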


Figure 12 — Spearman rank correlation between unit1 rate and rotation. The correlation with rotatory activity in drone 110705 on the second day of the experiment is statistically significant (p=2.6e-6; rho=0.91). The background pattern of the figure indicates the virtual environment, in this case alternating white and black stripes (scenario7). The rotation activity was standardized to the smallest redundant part of the pattern (120°). The solid black line indicates the mean rotation (rot); the dashed lines indicate the mean plus two times the standard deviation (SD). Note that only the left Y-axis (rotation) starts at -10 (in order to show possibly occurring backward running); the right Y-axis starts at 0.


Figure 13 — Spearman rank correlation between unit1 rate and translation. The correlation with forward running activity in drone 110705 on the second day of the experiment is statistically significant (p=0.0013; rho=0.63) but not very strong (rho ranges between -1 and 1, with 1 indicating perfect positive correlation). Note the higher amount of forward running on the white stripe. The dark grey bars each represent a translation range of 3x5°=15°. The background pattern of the figure indicates the virtual environment, in this case alternating white and black stripes. The translation activity was standardized to the smallest redundant part of the pattern (120°). The solid black line indicates the mean translation (meantrans); the dashed lines indicate the mean plus two times the standard deviation (SD). Note that only the left Y-axis (translation) starts in the negative range; a negative translation would indicate backward running.


Figure 14 — Spearman rank correlation between unit1 rate and translation. The correlation with the forward running activity of drone 110705 at the fourth day of the experiment is statistically significant (p=8.7e-13; rho=0.95). Note the decrease in total forward running in comparison to the second day. Each dark grey bar represents the translation within a range of 3×5° = 15°. The background pattern of the figure indicates the virtual environment, in this case alternating white and black stripes. The translation activity was standardized to the smallest part of the redundant pattern (120°). The solid black line indicates the mean translation (meantrans); the dashed lines indicate the mean plus two standard deviations (SD). Note that only the left Y-axis (translation) starts in the negative range; a negative translation would indicate backward running.


Additionally, the rank correlation between the rotatory activity in the virtual environment and the forward running component (translation) increased from the second day of recording (not shown, p=4.15e-6, rho=0.8) to the fourth day (not shown, p=1.4e-6, rho=0.97). The higher amount of rotation and spike rate in the 0° and 120° bins is due to the bee being released at these positions in the virtual environment. Similar neural activity patterns, preceding the selection of contrast areas, were observed in a wasp (Fig. 15) and a hornet (Fig. 16). This hornet also had translatory control of the virtual environment: the angular size of the projected structures was controlled by forward or backward movements in order to simulate distance in this particular case.


Figure 15 — Brain activity during virtual navigation in scenario7 (indicated by the stripe pattern in the figure background). The wasp's rotation activity is shown as a blue line (referring to the left Y-axis). Elevated unit1 spike rate (green dots, right Y-axis) precedes turning movements of the wasp.


Figure 16 — Brain activity during virtual navigation in scenario7 (indicated by the stripe pattern in the figure background). The virtual reality was presented throughout this trial. The animal had rotatory as well as translatory control of the presented pattern (rotation and "zooming in"). The hornet's rotation activity is indicated as a blue line (referring to the left Y-axis). Elevated unit1 spike rate (black dots, right Y-axis) precedes turning movements. The spike trace of this trial ends at approximately second 4100.


3.2. Recordings in pretrained bees in the virtual environment

In a second approach in the virtual environment, it was tested whether visual patterns learned during free flight (bee pretraining was done by Jaime Martinez-Harms) are also recognized during virtual navigation. To this end, freely flying honeybees were trained to enter a Perspex tunnel when the CS+ (conditioned stimulus) colour combination was displayed as floor pattern inside the tunnel (CS+: blue left, yellow right; CS-: yellow left, blue right). In case of the CS+ colour combination, the bees were rewarded with sucrose solution at the end of the tunnel; they did not get any reward if they entered the tunnel with the CS- colour combination (for a detailed description see methods 2.8). After one day of free flight training with several trials, the marked bees were caught at the end of the tunnel and transferred to the virtual environment. Preparation and electrode implantation were done as described in the methods section 2.4. The CS- and CS+ patterns were presented in a pseudo-randomized order in the virtual environment. CS+ trials were rewarded after recording the running velocity for 30-60 s in order to prevent extinction effects (Bittermann et al., 1983). For analysis, the bees were split into two groups. The learner bees (n=2) learned the discrimination task during free flight and virtual navigation; these bees showed a statistically significantly faster forward running velocity towards the CS+ than towards the CS- colour combination in the virtual environment (Wilcoxon ranksum test: p=0.0315, n=6 trials) (Fig. 17a).
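The rank-sum comparison of CS+ and CS- running velocities can be sketched as below. This is an illustrative normal-approximation implementation (average ranks for ties, no tie variance correction, no exact small-sample p-value), not the analysis software actually used; for real small-n data an exact test would be preferable.

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum (Mann-Whitney) test, normal
    approximation. Returns (rank sum of sample a, z score, p value)."""
    data = sorted(list(a) + list(b))
    ranks = {}
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        ranks[data[i]] = (i + j + 1) / 2.0  # average 1-based rank for ties
        i = j
    W = sum(ranks[v] for v in a)            # rank sum of the first sample
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0           # expected rank sum under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p from |z|
    return W, z, p
```

Applied to the per-trial forward running velocities of the CS+ and CS- trials, a small p-value indicates that one stimulus systematically evoked faster running.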


Figure 17 — Shift from real-world memory to virtual navigation. During free flight the bees learned to enter a tunnel with a specific colour pattern on the ground (CS+). The CS+ colour combination was rewarded with sugar water; the CS- situation was negatively reinforced. (a) In the virtual test, two of four pretrained bees showed significantly higher forward running velocities when the CS+ colour pattern was projected in the virtual scene (learner bees, Wilcoxon ranksum test, alpha=0.05, p=0.032) than for the alternative CS- pattern. (b) In the non-learner bees no such effect exists. (c) The recording site was identified via a fluorescent copper ion detection dye, suitable for confocal microscopy (Taki et al., 2010). The black arrow points towards the recording place at the ventral border of the left alpha lobe, a part of the mushroom body. The right calyx is labelled (ri Ca). (d) In one learner bee, the burst activity (measured as Fano factor (FFu1)) of unit1 was significantly reduced in the CS+ (Multiple Comparison test, based on Kruskal-Wallis result (prob.>Chi-sq.=4.5e-23), confidence level (CL)=99%, confidence interval (CI)=[42.8 - 195]) and the CS- (Multiple Comparison test, CL=99%, CI=[140.8 - 264.5]) trials in comparison to a control (Co) situation without a projected environment. The spike variability was significantly lower in the CS- situation than in the CS+ trials (Multiple Comparison test, CL=97.5%, CI=[1.4 - 166]), (le=left, ri=right, mid=middle).


The non-learner bees (n=2) form the second group; they did not transfer the visual discrimination task to the virtual situation. There is no statistically significant difference between the running velocities of the non-learner bees towards the CS+ and the CS- colour condition (Wilcoxon ranksum test: p=0.8518, n=6 trials) (Fig. 17b). For one learner bee (110904J22) the bursting intensity of units 1 and 2 differs significantly between the rewarded and the unrewarded stimulus situation in the virtual environment. These two units spike antagonistically in bursts: in most cases the Fano factor is high in unit2 and low in unit1 for CS+ trials, and vice versa for the CS- trials (Fig. 18).


Figure 18 — Relationship between forward running speed and spike variance in units 1 and 2 for 7 consecutive training trials in the virtual test situation in one learner bee. The pretrained bee was placed in a virtual scenario with either a rewarded colour combination (CS+) or a non-rewarded colour combination (CS-). In order to prevent extinction, the virtual CS+ trials were rewarded with sugar water. The spike variance of units 1 and 2 is measured as Fano factor (FF) and plotted on the right Y-axis; the forward running velocity is plotted on the left Y-axis. For most CS+ trials, the Fano factor is high for unit2 and low for unit1, and vice versa for the CS- trials. This tendency seems to be independent of the walking speed, which decays during ongoing tests in the virtual situation.


For unit1 the Fano factor is significantly reduced in the CS- (Multiple comparison test based on Kruskal-Wallis test results (p=4.5e-23): confidence interval (CI) for 99% confidence level [140.8 : 264.5]) and the CS+ (CI for 99% confidence level [42.8 : 195]) colour condition in comparison to a control situation in which the bee was recorded inside the virtual arena but without any projected visual pattern (Fig. 17d). For unit2 in the same animal during the same trials, the Fano factor is only reduced in the CS- condition (Fig. 19).


Figure 19 — Bursting activity of unit2 during the virtual test in a pretrained honeybee. The Fano factor (FF) is calculated as a measure of spike variance in order to show the bursting activity of unit2 (u2) spikes during seven (4×CS+, 3×CS-) consecutive tests of one pretrained honeybee in the virtual environment. The bursting activity was significantly reduced in the CS- test in comparison to the control situation without any projected pattern but inside the virtual arena (Multiple comparison test based on Kruskal-Wallis test results (p=9.4e-9): confidence interval (CI) for 99% confidence level (CL) [67.2 : 191.2]). The same applies for the comparison of the CS+ and CS- situations: the bursting activity in the CS- situation was significantly reduced in comparison to the CS+ situation (Multiple comparison test based on Kruskal-Wallis test results (p=9.4e-9): CI for 99% CL [11.9 : 195.9]). The bursting activity during the CS+ situation was not statistically significantly different from the control situation in the virtual environment (Multiple comparison test based on Kruskal-Wallis test results (p=9.4e-9): CI for 90% CL [-28.2 : 78.7]). (n for FF in different groups: Control: 486, CS+: 53, CS-: 84; sliding window for calculation of FF, see methods 2.9).


The Fano factor was calculated with a sliding window; the raw spike times were binned at 1 second. The recording site was identified with a fluorescent probe for copper ion detection (Taki et al., 2010) (confocal scan, Fig. 17c). The scale bar indicates 100 µm. A black arrow shows the position of the electrode tip at the ventral border of the alpha lobe, a part of the mushroom body; the white area is the stained brain tissue.
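The sliding-window Fano factor (variance of spike counts divided by their mean, computed over 1 s bins) can be sketched as follows. The 1 s binning follows the text; the window length of 10 bins and the function name are my own illustrative assumptions, since the exact window of methods 2.9 is not reproduced here.

```python
import numpy as np

def sliding_fano(spike_times, t_end, bin_s=1.0, win_bins=10, step=1):
    """Fano factor (variance/mean of spike counts) over a sliding window
    of 1 s bins. Windows with zero mean count are returned as NaN.
    FF ~ 1 for Poisson spiking, > 1 for bursty, < 1 for regular trains."""
    edges = np.arange(0.0, t_end + bin_s, bin_s)
    counts, _ = np.histogram(spike_times, bins=edges)
    ff = []
    for start in range(0, len(counts) - win_bins + 1, step):
        w = counts[start:start + win_bins]
        m = w.mean()
        ff.append(w.var() / m if m > 0 else np.nan)
    return np.array(ff)
```

A perfectly regular train (one spike per bin) yields FF = 0 everywhere, while packing the same spikes into bursts drives FF well above 1, which is the contrast exploited in Figures 17d-19.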

3.3. Extracellular recordings during decision making in a Y-maze

The spike activity of two recorded units during 18 consecutive spontaneous visual decisions in a Perspex Y-maze with coloured floor patterns was pooled. The bees showed a side preference rather than colour learning. The frequency of unit1 and unit2 was elevated during the decision period in comparison to a non-decisive control situation during stationary walking (Fig. 20). Unit1 activity rose above the mean plus three standard deviations of the control situation during the whole calculated period of the decision process, from one second before the decision for one of the two arms of the maze until three seconds after it. Unit2 activity rose above the same level (control mean plus three standard deviations) for the first two seconds after the behavioural decision to enter one of the arms of the Y-maze.
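The criterion used here (per-bin rate averaged over trials, flagged where it exceeds the control mean plus three standard deviations) can be sketched as below. This is a minimal sketch with invented function and variable names, not the author's analysis code.

```python
import numpy as np

def choice_related_bins(trial_rates, control_rates, n_sd=3):
    """Mean peri-decision rate per 1 s bin across trials, flagged where it
    exceeds the control mean + n_sd standard deviations.

    trial_rates   : (n_trials, n_bins) spike rates aligned to the decision
    control_rates : 1-D rates from the non-decisive stationary-walking control
    """
    trial_rates = np.asarray(trial_rates, float)
    mean_rate = trial_rates.mean(axis=0)
    threshold = np.mean(control_rates) + n_sd * np.std(control_rates)
    return mean_rate, mean_rate > threshold
```

For unit1 in Figure 20 the flagged bins would span from 1 s before to 3 s after the decision; for unit2 only the first two post-decision bins would cross the threshold.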


Figure 20 — Changes in neural activity during behavioural decision making. One freely running bee was recorded during 18 consecutive decisions in a Y-maze. The bee did not learn a colour pattern presented on the ground but developed a side preference. Two units were recorded. Unit1 (u1) spiking activity increases from 1 s before the decision (pre) until three seconds after the decision (post). Unit2 (u2) spiking activity increases during the two seconds after the behavioural decision. In the control situation the bee was walking stationary on a Styrofoam sphere, without a decision task.



Spontaneous focusing onto stripe patterns as shown in Figure 7b, and the associated changes in neural activity in a drone, a hornet (Fig. 16) and a wasp (Fig. 15), indicate that bees as well as other hymenopteran insects are able to actively navigate in visual virtual environments. Nevertheless, it was not possible to train bees to select particular visual patterns in the virtual environment. Since 50% of the pretrained bees were able to transfer a colour pattern learned during free flight to a virtually presented colour pattern (Fig. 17a, b), the absence of learning in the virtual environment cannot simply be due to primary sensory conflicts such as insufficient visibility of beamer projections for the bee compound eyes. As Karl von Frisch stated (1914), the ability to learn is related to the animal's necessities. During free flight training of colour patterns, the bees are allowed to keep their own pattern of task timing: usually the bees visit an artificial feeding station marked with some kind of cue, suck sugar water, and afterwards fly back to the hive, their social context. This procedure and the ongoing social contact might be important for visual learning, or at least provide the motivational basis for it. Karl Weiss (1954) showed that walking bees and wasps are indeed able to learn rules associated with colours during navigation in a flat maze. Additionally, different motivational states of the animal may influence the walking pattern in the virtual environment; gain-state-dependent changes in optomotor head movements have been described for blowflies (Rosner et al., 2009). Motivation seems to be an important point for performing operant learning tasks in the virtual environment: an animal that is stressed by the experimental procedure or the preparation might be in flight mode rather than in any learning-related behavioural mode. The next step will be to improve the immersive properties of the virtual environment.
In particular, translatory feedback for the movements of the animal on the floating ball would improve the virtual environment as an illusion of a natural situation for the insect. This environment might also include different odorants. It remains to be elucidated to what extent olfactory and visual cues interfere in walking honeybees. For the fruitfly Drosophila melanogaster it was shown that olfactory modification of visual reflexes takes place in the mushroom body (Chow et al., 2011). In restrained honeybees, odour information seems to be dominant (Niggebrügge et al., 2009); chemosensory dominance has also been observed in freely flying bees (von Frisch, 1914). Additional evidence for active virtual navigation by honeybees comes from the comparison of the duration of stay over four consecutive days in two different virtual environments in a forager honeybee and a drone: both had nearly the same spatial distribution of peak durations of stay for the scene with blue stripes and nearly no peaks in the distribution for the pattern with black and white stripes (Fig. 6). Such observations, and precisely the imperfect immersive properties of the virtual environment or of particular scenes, might help us learn more about the way bees and other insects recognize their environment and which conditions are mandatory for learning and navigation tasks in bees. An operant learning paradigm for honeybees in the virtual environment is a goal for the future and might be a great advantage for studying the neural basis of behavioural decision making in an insect model. The recorded unit1 in the drone presented in Figure 7 seems to be related to arousal in the broader sense: this unit is active in the pre-decision phase (maybe during visual orientation by the animal) and even after the movement has finished (Fig. 7a). The long pre- and post-rotation activities of unit1 exclude mere motor-related neural activity.
Similar neural activity patterns were also observed in other virtually navigating hymenopteran insects (Fig. 15, 16). Possibly such activity patterns are used as a gateway to recruit other parts of the network within a behaviourally defined time range. Additionally, there seems to be some degree of plasticity in the correlation between neural activity and movement pattern. The rotatory activity is positively correlated with neural activity throughout the 4 days of recording in this drone (Fig. 7, 12). In contrast, the translatory activity, which has no control function for the surrounding virtual environment in this case, is more strongly correlated with neural activity and rotation on the 4th day than on the 2nd day of the experiment (Fig. 13, 14). Cortical plasticity can improve the performance of monkeys in controlling prosthetic devices via neural implants, also across different brain areas (Musallam et al., 2004). A situation-dependent change of neural activity in the mushroom body was observed in one pretrained forager bee: the neural activity did not alter in its total mean frequency but in its bursting activity during shorter time windows. In a control measurement, the variance of spike times (which is high during ongoing short burst bouts) was high in comparison to the learned CS+ and CS- trials. Interestingly, the burst activity was lowest for the CS- colour condition (Fig. 17d). The bursting activity of units 1 and 2 differs between CS+ and CS- trials, as seen in Figure 18. These data provide evidence that bursting, or variability in spike trains, may have coding properties during operant learning or discrimination tasks. Variability has been described in detail for cortical spike trains (Nawrot et al., 2008). The virtual environment shifts bee navigation into the laboratory setup; another approach is to combine electrophysiological recordings with navigation in a real setting, here a Y-maze.
The combination of both approaches has the potential to give first insights into the neural basis of behavioural decision making and navigation in eusocial insects. The presented methods might also be interesting for studying possible neural mechanisms of place learning in insects. In Figure 20, neural correlates of decision making in a Y-maze are shown: the spike activity of unit1 is already increased about one second prior to the decision to enter one of the two arms of the Y-maze, while the second unit had higher activity levels after the decision took place. It seems as if the decision process is temporally resolved in slightly deferred activity patterns of different neurons. Cells participating in different sequences of a decision trial at distinct points in time have been shown in a recent study in virtually navigating mice (Harvey et al., 2012). Why is it important and necessary to develop methods for studying the neural correlates of decision making in insects? Humans and insects decide in similar ways (Louâpre et al., 2010). Since artificial intelligence and biologically inspired robots are a growing field of interest, conceptual insights into brain processes that underlie decision making and navigation might be of interest with respect to the development of biologically inspired algorithms and autonomously acting agents (Behnke and Rojas, 2001, Riedmiller, 2005, Lauer et al., 2011). Comparatively small insect brains, with identifiable single neurons, might be well suited for such purposes. In this article, two methods, usable combined or individually, for studying the neural correlates of behavioural decision making and navigation in insects of about honeybee size are described. The data obtained with the newly established methods provide, to my knowledge, the first evidence for distinct timing of brain activity during behavioural decisions in honeybees.


The project was funded by the BMBF Bernstein Focus "Insect Inspired Robots: Towards an Understanding of Memory in Decision Making".

6. Author information

Nora Vanessa de Camp

Humboldt-University Berlin, Behavioural Physiology, Invalidenstr. 43, 10115 Berlin



de Araujo, D.B., Baffa, O., Wakai, R.T. (2002) Theta Oscillations and Human Navigation: A Magnetoencephalography Study. Journal of Cognitive Neuroscience, 14, 70-78.

Behnke, S., Rojas, R. (2001) A Hierarchy of Reactive Behaviors Handles Complexity. In Proc. ECAI2000 Workshop Balancing Reactivity Social Deliberation in Multi-Agent-Systems, 125-136.

Bittermann, M.E., Menzel, R., Fietz, A., Schäfer, S. (1983) Classical Conditioning of Proboscis Extension in Honeybees (Apis mellifera). Journal of Comparative Psychology, 97, 107-119.

Denker, M., Finke, R., Schaupp, F., Grün, S., Menzel, R. (2010) Neural correlates of odor learning in the honeybee antennal lobe. European Journal of Neuroscience, 31, 119-133.

Dombeck, D.A., Harvey, C.D., Tian, L., Looger, L.L., Tank, D.W. (2010) Functional imaging of hippocampal place cells at cellular resolution during virtual navigation. Nature Neuroscience, 13, 1433-1440.

Ferguson, J.E., Boldt, C., Redish, A.D. (2009) Creating low-impedance tetrodes by electroplating with additives. Sensors and Actuators A: Physical, 156, 388-393.

von Frisch, K. (1914) Der Farbensinn und Formensinn der Bienen. Zoologische Jahrbücher, Abteilung Allgemeine Zoologie und Physiologie, 35, 1-238.

Gillner, S., Mallot, H.A. (1998) Navigation and Acquisition of Spatial Knowledge in a Virtual Maze. Journal of Cognitive Neuroscience, 10, 445-463.

Harvey, C.D., Coen, P., Tank, D.W. (2012) Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature, 484, 62-68.

Harvey, C.D., Collman, F., Dombeck, D.A., Tank, D.W. (2009) Intracellular dynamics of hippocampal place cells during virtual navigation. Nature, 461, 941-946.

Höllscher, C., Schnee, A., Dahmen, H., Setia, L., Mallot, H.A. (2005) Rats are able to navigate in virtual environments. Journal of Experimental Biology, 208, 561-569.

Kober, S.E., Kurzmann, J., Neuper, C. (2012) Cortical correlate of spatial presence in 2D and 3D interactive virtual reality: An EEG study. International Journal of Psychophysiology, 83, 365-374.

Lauer, M., Schönbein, M., Lange, S., Welker, S. (2011) 3D-objecttracking with a mixed omnidirectional stereo camera system. Mechatronics, 21, 390-398.

Lee, J.H., Kwon, H., Choi, J., Yang, B.H. (2007) Cue-Exposure Therapy to Decrease Alcohol Craving in Virtual Environment. CyberPsychology & Behavior, 10, 617-623.

Lindauer, M. (1959) Angeborene und Erlernte Komponenten in der Sonnenorientierung der Bienen. Zeitschrift für Vergleichende Physiologie, 42, 43-62.

Louâpre,P., van Alphen, J.J.M., Pierre, J.S. (2010) Humans and Insects Decide in Similar Ways. PloS One, 5, e14251.

Mauelshagen, J. (1993) Neural Correlates of Olfactory Learning Paradigms in an Identified Neuron in the Honeybee Brain. Journal of Neurophysiology, 69, 609-625.

Menzel, R., Giurfa, M. (2006) Dimensions of Cognition in an Insect, the Honeybee. Behavioral and Cognitive Neuroscience Reviews, 5, 24-40.

Menzel, R. (1975) Electrophysiological Evidence for Different Colour Receptors in One Ommatidium of the Bee Eye. Zeitschrift für Naturforschung, 30 c, 692-694.

Musallam, S., Corneil, B.D., Greger, B., Scherberger, H., Andersen, R.A. (2004) Cognitive Control Signals for Neural Prosthetics. Science, 305, 258-262.

Nawrot, M.P. et al. (2008) Measurement of variability dynamics in cortical spike trains. Journal of Neuroscience Methods, 169, 374-390.

Niggebrügge, C., Leboulle, G., Menzel, R., Komischke, B., Hempel de Ibarra, N. (2009) Fast learning but coarse discrimination of colours in restrained honeybees. Journal of Experimental Biology, 212, 1344-1350.

Okada, R., Rybak, J., Manz, G., Menzel, R. (2007) Learning-Related Plasticity in PE1 and Other Mushroom Body-Extrinsic Neurons in the Honeybee Brain. Journal of Neuroscience, 27, 11736-11747.

Peng, Y., Xi, W., Zhang, W., Zhang, K., Guo, A. (2007) Experience Improves Feature Extraction in Drosophila. Journal of Neuroscience, 27, 5139-5145.

Riedmiller, M. (2005) Neural Fitted Q Iteration — first experiences with a data efficient neural reinforcement learning method. In Proc. of the European Conference on Machine Learning, ECML 2005, Porto, Portugal, October 2005.

Rosner, R., Egelhaaf, M., Grewe, J., Warzecha, A.K. (2009) Variability of blowfly head optomotor responses. Journal of Experimental Biology, 212, 1170-1184.

Chow, D.M., Theobald, J.C., Frye, M.A. (2011) An Olfactory Circuit Increases the Fidelity of Visual Behavior. Journal of Neuroscience, 31, 15035-15047.

Rybak, J., Menzel, R. (1998) Integrative Properties of the Pe1-Neuron, a Unique Mushroom Body Output Neuron. Learning & Memory, 5, 133-145.

Strube-Bloss, M.F., Nawrot, M.P., Menzel, R. (2011) Mushroom Body Output Neurons Encode Odor-Reward Associations. Journal of Neuroscience, 31, 3129-3140.

Taki, M., Iyoshi, S. Ojida, A., Hamachi, I., Yamamoto, Y. (2010) Development of Highly Sensitive Fluorescent Probes for Detection of Intracellular Copper(I) in Living Systems. Journal of the American Chemical Society, 132, 5938-5939.

Van Swinderen, B., Greenspan, R.J. (2003) Salience modulates 20-30 Hz brain activity in Drosophila. Nature Neuroscience, 6, 579-586.

Weiss, K. (1954) Der Lernvorgang Bei Einfachen Labyrinthdressuren von Bienen und Wespen. Zeitschrift für Vergleichende Physiologie, 36, 9-20.

Wolf, R., Heisenberg, M. (1991) Basic organization of operant behavior as revealed in Drosophila flight orientation. Journal of Comparative Physiology A, 169, 699-705.

Information about this Article
This Article has not yet been peer-reviewed
This Article was published on 1st April, 2013 at 10:57:05.

This work is licensed under a Creative Commons Attribution 2.5 License.
