Figs 16A and 16B show the front interchangeable toilet seat, attached to and detached from the housing base, in accordance with another embodiment. Figs 17A and 17B show the interchangeable seat in an exploded view inside the housing, from the right side and the left side, in accordance with another embodiment.

Figs 18A and 18B show the width adjustable attaching plate unsecured, and then secured, to the assembly. Fig 19 shows the toilet seat in the up position, attached to a toilet. Fig 20A is a front view of the Kinect seat and lid assembly on a toilet. Fig 20B is a close-up front view of the Kinect seat and lid assembly on a toilet. Figs 21A and 21B are views inside the housing of the Kinect seat and lid assembly. Fig 22A is a view of the Kinect seat assembly on a toilet. Fig 22B is a view inside the Kinect assembly housing, on a toilet.

Fig 23A is a side view of the Kinect seat assembly. Fig 24 is a back view inside the housing of the Kinect seat assembly. Fig 25 is a back inside view of the housing of the Kinect seat and lid assembly.

A housing assembly has a base 48, front 44, left side 50, right side 52 and back 46. Where the housing components meet, rubber gaskets, silicone caulking, o-rings 80, or other water-sealing materials are used to prevent water from entering the housing.

Many components of the assembly are made of high strength water resistant plastic and other materials. Some of the components of the assembly are secured to each other by plastic screws and by stainless steel, zinc coated screws and nuts. A toilet seat 40 is connected to two rotating shafts. From the toilet seat, the shafts pass through washers 84 that sit between the seat and the housing.

One shaft 78 enters the housing through the right side. The shaft travels through a lubricated o-ring 80. The length of the shaft 78 then passes through two plastic support ball bearings 82 that are mounted to the housing.

The shaft 78 connects to a rotation sensor 88, which is mounted to the housing. The rotation sensor 88 is connected to a microcontroller or microcomputer. The microcontroller is programmed with Sensory's TrulyHandsFree voice technology, speech recognition and voice recognition software.

The user can choose a male or female voice for the microcontroller's synthesized talking voice. Programming of the assembly is accomplished by following the programming manuals that accompany Sensory's TrulyHandsFree speech software.

A water resistant microphone 62 is connected to the microcontroller. The left side shaft enters the left side assembly wall 52, going through a plastic washer 84, an o-ring 80, and two ball bearings 82. The end of the shaft connects to a servo motor 86. The servo motor 86 connects to a servo controller 96 and to the microcontroller, Fig 3A.

A system reset button 69 connects to the microcontroller. The servo controller 96 connects to a rechargeable lithium-ion battery 64 and to an alternating current to direct current adapter 72. Fig 2B shows a system on/off button 63, which connects to the servo motor controller. In listening mode the system uses a minimal amount of power. The system activates when it hears a user command. A user can verbally direct the seat 40 to move up or down to a resting position.

Saying the word 'up' moves the seat 40 up; saying the word 'down' moves the seat 40 down. The seat 40 rests on the toilet bowl rim when it is in the down position. When the seat 40 is in the up position, it rests against the toilet water tank, bathroom wall, or the seat assembly.

Spoken words are sensed by the microphone 60, which sends signals to the microcontroller; the words are analysed by the microcontroller's voice recognition software. The words' meaning and intent are analysed, the user's desire is understood, the right system information is found, and the system responds with the desired function. The system matches the spoken words to a system response, such as raising or lowering the seat. Pressing the reset button 69 will turn the system off and then on again, including the microcontroller or computer.

Power Supply

The battery connects to the motor controller and to the alternating current to direct current adapter 72, abbreviated ACDC adapter. A low battery light indicator 68 connects to the microcontroller and motor controller. The battery 64 can also be changed while in the assembly.

Attaching The Assembly To The Toilet Description

Figs 18A and 18B show the base 48 of the housing attached to a plate that is attachable to a toilet. The toilet plate attaches to the toilet by two bolts 76 and two nuts 74. The plate is adjustable for the width distance between the two toilet holes (not shown).

The housing assembly slides onto the plate and locks onto the plate by moving a sliding latch into the locked position. Figs 4A and 4B show a longitudinally adjustable plate, to which the assembly attaches. Figs 18A and 18B show a plastic plate that allows the two bolts 76 and nuts 74 to slide widthwise to adjust for differing toilet bowl hole distances; the plate attaches to the toilet. Figs 4A and 4B show a plate that allows the two bolts to slide longitudinally.

The assembly slides onto the anchor plate. Figs 4A, 4B, 18A and 18B show how the plate is secured to the assembly by sliding a latch down, thus securing the assembly to the toilet. Other seat assembly and toilet connections include Fig 19, which shows the seat assembly permanently attached to the toilet, the seat assembly and toilet being one assembly. Fig 6B shows the seat and lid assembly permanently attached to a toilet, the seat and lid assembly and the toilet being one assembly.

The housing has a one-way water drain 94, Fig 8B, located at the base. Any possible moisture is directed to the outside of the housing. The housing is water resistant and waterproofed. The microcontroller incorporates a microprocessor, a central processing unit with a speed of 1. A video camera, video cam or web cam 58 connects to the microcontroller. Luxand facial recognition software is programmed using Luxand programming manuals. The camera 58 looks out of the front of the housing with an unobstructed view, whether the seat 41 is positioned up or down.

The system is connected to a reset button 70 and can be reset by pushing the system reset button 70, Figs 5A and 5B.

Facial Recognition Operation

The assembly incorporates a facial recognition system. The system may use a computer (not shown) or a microcontroller, which is programmed with facial recognition software, as illustrated in Figs 3A, 3B and 9A.

The facial recognition and speech recognition software enables the system to identify and recognize a user, and to associate the user with their system favourites. The facial recognition system visually identifies a user. When the user is identified, the system moves the seat 41 to the user's usual desired position.

The system remembers a user's preferences from their initial use of the system. User preferences or settings may include seat 41 position, sound effects or music, and the desired seat 41 position when vacating the assembly. The facial recognition software is based on the ability to recognize a face and measure the various features of the face. Two-dimensional, 2D, or three-dimensional, 3D, facial recognition software can be used to identify a user. A video camera 58 or web cam 58, positioned in the front of the housing and viewing through a transparent window 56, permits an unobstructed view of a user, whether the seat 41 is up or down.
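
As a minimal illustration of the preference lookup just described, the following sketch maps a recognized face to stored settings and applies the user's usual seat position. The recognizer stub, user IDs and preference fields are all hypothetical; a real system would call into the facial recognition library's own matching routine.

```python
# Illustrative sketch: associate a recognized user with stored preferences
# and apply the usual seat position. All names are hypothetical.

preferences = {
    "user_a": {"seat_position": "up",   "sound": "music"},
    "user_b": {"seat_position": "down", "sound": "none"},
}

def recognize_face(image):
    return "user_a"            # stub: would return the matched user's ID, or None

def move_seat(position):
    print("moving seat", position)   # stub: would drive the servo controller

def on_camera_frame(image):
    user_id = recognize_face(image)
    if user_id in preferences:
        move_seat(preferences[user_id]["seat_position"])   # apply the usual position

on_camera_frame(object())      # placeholder frame; prints "moving seat up"
```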

The video cam 58 is connected to the microcontroller and supplies images of users to the facial recognition software programmed in the microcontroller. A user only needs to be within view of the video cam 58 to activate the facial recognition software. The distance at which the facial recognition software activates the system, and moves the seat and lid to the user's usual positions, is adjustable by the user.

Motion Sensing Kinect Sensor Device Description

An alternate embodiment incorporates a Kinect motion sensing, voice recognition and facial recognition device made by the Microsoft Company.

In one embodiment the Kinect connects to a microcontroller, shown in figs 22A, 22B and 23. In another embodiment the Kinect is connected to a computer. The computer may be a Toshiba laptop computer made by the Toshiba company, with an Intel i3 CORE processor made by the Intel company, or a netbook computer made by any of a variety of computer companies, as shown in figs 20 and 21. The microcontroller is programmed using the Kinect for Windows software developer's kit, SDK. The assembly connects to the computer. The computer is inside the housing. The computer connects to the servo controller and to the servo motors. The computer or microcontroller connects to a reset button. The computer connects to the Internet using an Ethernet cable plug outlet, or wirelessly using a Wi-Fi card. The computer connects to a touch sensitive holographic projector. The Kinect sensors look out of the front of the housing with an unobstructed view of the user area in front of the housing and toilet.

Motion Sensing Kinect Sensor Device Operation

An additional embodiment uses the Kinect motion sensor connected to a computer, XBOX console, or a microcontroller to position the seat and lid, shown in figs 20, 21, 22A, 22B, 23, 24 and 25. The Kinect system uses facial recognition software to identify a user.

The user waves a hand in front of and within view of the Kinect microcontroller assembly, or the Kinect computer assembly, to activate the Kinect sensor. The Kinect system uses skeletal mapping to analyze body gestures for movement and position. The gestures are associated with, and activate, system functions. The Kinect enables the user to control and interact with the system without needing to touch the seat to raise or lower it. To raise or lower the seat, the user simulates raising or lowering the seat with their hand.

The assembly's seat follows the hand motion of the user raising and lowering the seat. To move the lid, the user simulates raising and lowering the lid with their hand. The seat and lid move separately. If the lid is resting on the seat, a simulated raising with the hand will raise the lid first; a further simulated raising will raise the seat. If both seat and lid are in the up position, a simulated lowering will lower the lid, and a further simulated lowering will lower the seat. The seat and lid are programmed to move one at a time, to avoid moving against each other, for example the seat moving up while the lid is moving down.
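
A minimal sketch of this one-gesture-at-a-time ordering is shown below. The class and method names are hypothetical, and a real implementation would derive the raise and lower gestures from Kinect skeletal tracking and drive the servo controller rather than printing.

```python
# Sketch of the seat/lid gesture-ordering logic described above.

class SeatLidController:
    def __init__(self):
        self.seat_up = False  # seat starts resting on the bowl
        self.lid_up = False   # lid starts resting on the seat

    def raise_gesture(self):
        """A simulated 'raise' hand motion: lid first, then seat."""
        if not self.lid_up:
            self.lid_up = True          # first raise lifts the lid
            return "lid raised"
        if not self.seat_up:
            self.seat_up = True         # a further raise lifts the seat
            return "seat raised"
        return "already up"

    def lower_gesture(self):
        """A simulated 'lower' hand motion: lid first, then seat, as described in the text."""
        if self.lid_up:
            self.lid_up = False
            return "lid lowered"
        if self.seat_up:
            self.seat_up = False
            return "seat lowered"
        return "already down"

# Example: starting with both down, two raise gestures lift the lid, then the seat.
ctrl = SeatLidController()
print(ctrl.raise_gesture())  # "lid raised"
print(ctrl.raise_gesture())  # "seat raised"
```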

The Kinect utilizes a natural user interface for sensing body and hand gestures and spoken commands. Vocal commands can be used to raise and lower the seat and lid, using the system's voice recognition software. Vocal commands or body and hand gestures can be used to control the system's functions, such as music and sound effects.

The user can access the internet with vocal commands through the microphones; the retrieved information is broadcast audibly to the user by the speaker. The Kinect uses four microphones to recognize and separate a user's voice from other noises in the room.

The user may verbally chat with other users, using the internet connection. The device creates a digital skeleton of a user, based on depth data that tracks the entire body. The device uses a red, green, blue light detecting camera, RGB camera, a depth sensor and a multi-array microphone running proprietary software, which provide full-body three-dimensional motion capture, facial recognition, gesture recognition and voice recognition capabilities.

The depth sensor consists of an infrared laser projector combined with a monochrome complementary metal oxide semiconductor, CMOS, sensor, which captures video data in three dimensions, 3D, under any ambient light conditions. The motion sensor can be used at night in darkness, without light, or in lighted environments.

The Kinect that is connected to the computer is programmed using the Microsoft Windows commercial software development kit, SDK. The Kinect that is connected to the microcontroller is programmed using the same Microsoft Windows commercial software development kit, SDK. With the Kinect connected to the computer, the FAAST key mapping software is one of many ways to program the motion sensing system. Key mapping associates body gestures with the activation, pressing, of a key on a keyboard. The key activation is associated with the activation of a system function.
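
To make the key-mapping idea concrete, here is a rough sketch (not FAAST itself). The gesture names and the second key binding are hypothetical; the 'b' binding for raising the seat follows the example given in the next paragraph.

```python
# Illustrative sketch: a gesture is translated into a keystroke, and the
# keystroke is bound to a system function.

def raise_seat():
    print("raising seat")          # stub: would command the servo controller

def lower_seat():
    print("lowering seat")

# FAAST-style mapping: gesture -> simulated key press
gesture_to_key = {
    "hand_raise": "b",             # 'b' is the example key used in the text
    "hand_lower": "n",             # hypothetical key for lowering
}

# Application-side mapping: key press -> system function
key_to_function = {
    "b": raise_seat,
    "n": lower_seat,
}

def on_gesture(gesture_name):
    key = gesture_to_key.get(gesture_name)
    if key in key_to_function:
        key_to_function[key]()     # the key press triggers the bound function

on_gesture("hand_raise")           # prints "raising seat"
```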

For example, the simulated raising of the seat by a user's hand activates the letter b on the keyboard; the activated letter b instructs the computer to raise the seat.

Kinect Speech Recognition

The Kinect software programmed in the computer or microcontroller matches a word or phrase to a system function.

The speech recognition programming software listens for a speech grammar, words that are associated with system functions. If the user says the word up, and the seat and lid are resting on the toilet bowl in the down position, the associated system function is to have the computer move the seat and lid to the up position. Using the Kinect speech recognition software, the Kinect system connects to the internet. Combined with the internet search engine Bing, made by Microsoft, the system will retrieve information from the internet.

The internet can be accessed verbally using the microphone, shown in fig 20A. The retrieved text information is converted to synthetic speech and conveyed to the user through the speaker. Internet audio sound is also conveyed through the speaker.

Kinect Assembly Holographic Internet Web Browsing

The internet can be accessed using the touch sensitive holographic projector, illustrated in fig 20A. The holographic projector projects onto a surface or into space.

The internet holographic display is similar to a computer touch screen display, presented on a surface or in the air. The user touches the holographic images, touchable holograms, of the internet web pages, or a displayed keyboard, to navigate the internet.

Internet information is displayed holographically: web pages, internet video, internet TV and internet games, with sound broadcast through the speaker. User videos on a secure digital card can be placed in the secure digital card reader 77 and can be played and viewed holographically. The system menu options can be accessed by touching the touch sensitive holographic display. Using the holographic touch screen, in the air or on a known sanitized surface, allows the user to access the system and internet without having to touch surfaces that may contain bacteria and viruses.

Voice And Facial Recognition Seat And Lid Description

An alternate embodiment uses a computer 98. The computer 98 is a netbook portable computer made by the Samsung Company.

The computer 98 is programmed with voice recognition software. The speech recognition or voice recognition software is made by the Sensory Company. The computer is programmed to rotate the servo motor shaft 89 an exact number of degrees, and to stop the rotation when the servo shaft 89 has reached the desired position.

The computer 98 is programmed to turn off the servo motor 89, or servo motors 89, when the seat 47 and/or lid 43 reach the up or down position. The rotational sensor 88 signals the computer 98 when the seat 47 or lid 43 reaches the up or down resting position. In figs 7A and 7B, an ultrasonic motion detector sensor connects to the computer 98. The distance at which the ultrasonic sensor activates the system is 2 to 3 meters, and is adjustable. The ultrasonic sensor looks out of the front of the housing through a transparent window or lens, a position whose view is unobstructed whether the seat 47 or lid 43 is in a down position or in an up position, Fig 6B.

A web cam, video camera, or video cam 61 connects to the computer 98. The web cam looks out of the front of the housing with an unobstructed view. A computer reset button 67, on the right side 53 of the housing, connects to the computer 98. A low battery indicator light 65 connects to the motor controller 99 and the computer 98, and is located on the right side 53 of the housing.

The motor controller 99 connects to a rotating circuit 90 attached to a rotating shaft 79 of the lid 43. A wire 92 travels from the rotating circuit 90, through the inside of the shaft 79 and the inside of the lid 43, to 7 ultraviolet light emitting diodes attached to the underside of the lid 43. The lid 43 connects to the rotating shaft 79, which travels through a washer 85 and a lubricated o-ring 81 into the housing, through two plastic ball bearing supports 83, with a rotation sensor 88 attached to the shaft. The lid 43 opposing shaft end is attached to the servo motor 89 in a similar fashion as the seat shaft connections.

The system menu can be accessed either verbally, through the water resistant microphone 54 and water resistant speaker 55, or visually, using a liquid crystal display with a menu control button. The menu control button also functions as a system on/off button. A secure digital card reader 73 connects to the computer 98. The secure digital card reader 73 is located on the left side of the housing. It has a water resistant rubber flap 75 that covers the access port.

Secure digital cards (not shown) with the user's music or sound effects can be placed into the reader 73. The assembly connects to broadband internet by using a Wireless Fidelity card, Wi-Fi card, to wirelessly connect to the internet.

The assembly also uses an Ethernet connection to connect to the internet by wire, as shown in Fig 9B. The Ethernet wire (not shown) plugs into the assembly's Ethernet outlet. The motors 89, plastic ball bearings 83, rotation sensor 88, rotating circuit 90, ultrasonic sensor, video cam 61, servo controller 99, microphone 54, secure digital card reader 73, Ethernet plug, computer or microcontroller system reset button 67, low battery indicator light 65, and ACDC plug 71 are attached to the housing, as shown in Figs 3A, 7A and 17B.

The system can also be activated by a user's voice commands if the system is in low power listening mode. The ultrasonic sensor is located in the front of the housing, looking out through a transparent window. The ultrasonic sensor's view is unobstructed whether the seat 47 and lid 43 are up or down. On initial contact the user hears an audible greeting, such as 'nice day', or a prompt such as 'what seat 47 position would you like'.

The user's reply to prompts is analyzed by the computer 98's speech recognition software, voice recognition software. The system can respond to commands at distances of up to 5 meters, or in high noise conditions.

The computer 98 uses two speech engines: speech recognition, with which the computer 98 hears and understands human speech, and speech synthesis, with which the computer 98 talks, as text-to-speech capabilities. Both speech engines are used together by the computer 98, Fig 9B, to process a person's commands. The computer 98's software follows steps in understanding the user's vocalized desires: Step 1, voice activation (high noise immunity, low power); Step 2, speech recognition and transcription (moderate noise immunity, high accuracy); Step 3, intent and meaning (the system accurately understands the user's desires); Step 4, data search and query (the system finds the right information); Step 5, response (the system accurately interprets the text data, produces a speech response that sounds like a human, and a function response that directs servo motor movements and system functions).
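
The five steps can be pictured as a small pipeline of functions, as in the hedged sketch below; every function name and return value is a hypothetical stub standing in for the real speech engines and servo commands.

```python
# Minimal sketch of the five-step flow described above, with each step as a stub.

def voice_activation(audio):
    return audio is not None                 # step 1: wake on speech, low power

def transcribe(audio):
    return "seat up"                         # step 2: speech recognition (stubbed)

def extract_intent(text):
    return {"target": "seat", "direction": "up"}   # step 3: meaning and intent

def query_system(intent):
    return f"move {intent['target']} {intent['direction']}"  # step 4: find the right action

def respond(action):
    print("speaking: OK, " + action)         # step 5a: synthesized speech reply
    print("driving servos: " + action)       # step 5b: function response to the motors

audio = object()                             # placeholder for captured audio
if voice_activation(audio):
    respond(query_system(extract_intent(transcribe(audio))))
```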

The user can choose from an audible menu of seat 47 and lid 43 position options. System menu functions can be accessed audibly using the microphone 54 and speaker 62, or visually using the liquid crystal display, as shown in Figs 6A and 6B. The user can move the seat 47 and/or lid 43 by the spoken words up or down, lid up or lid down, seat up or seat down, and stop. The user can choose from combinations of seat 47 and lid 43 positions: seat 47 down, resting on the toilet bowl, or seat 47 up, just past vertical and resting on the lid 43; and lid 43 resting on the seat 47, or up, resting against the toilet water tank, toilet hardware, or the assembly.

Other combinations are lid 43 up with seat 47 down, seat 47 up with lid 43 up, and seat 47 down with lid 43 down. If the seat 47 and lid 43 are in a down position, the spoken word 'up' will move both seat 47 and lid 43 to the up position. Saying 'lid up' will move the lid 43 to the up position. Speaking the word 'down' will move the seat 47 and lid 43 to the down position.

Saying 'seat down' will move the seat 47 to the down position, with the lid 43 staying in the up position. The user can say 'stop' while the seat 47 is moving, which stops the seat. The seat 47 and lid 43 can also be moved manually, by hand, in the usual manner.
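
A compact sketch of this spoken command set, with the state handling and names invented for illustration, might look like the following; in a real system the recognized phrase would come from the speech engine and the state changes would be issued as servo commands.

```python
# Sketch of the spoken command set ("up", "down", "seat up", "lid down", "stop", ...)
# mapped onto seat/lid movements.

state = {"seat_up": False, "lid_up": False}
moving = False

def command(phrase):
    global moving
    phrase = phrase.lower().strip()
    if phrase == "stop":
        moving = False
        return "movement stopped"
    actions = {
        "up":        {"seat_up": True,  "lid_up": True},
        "down":      {"seat_up": False, "lid_up": False},
        "seat up":   {"seat_up": True},
        "seat down": {"seat_up": False},
        "lid up":    {"lid_up": True},
        "lid down":  {"lid_up": False},
    }
    if phrase not in actions:
        return "unrecognized command"
    moving = True
    state.update(actions[phrase])        # servo commands would be issued here
    return f"seat up: {state['seat_up']}, lid up: {state['lid_up']}"

print(command("up"))         # both move to the up position
print(command("seat down"))  # seat moves down, lid stays up
```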

In Figs 7A and 7B, the computer 98 signals the servo motor controller 99 to send a voltage to the servo motor 89 to rotate the motor shaft 89 an exact, predetermined number of degrees, 95 degrees clockwise or counter-clockwise. The servo motor shaft 89 is connected to a slip clutch; the slip clutch is connected to the rotating shaft 79 that is connected to the toilet seat 47. The rotation causes the seat to pivot from horizontal to just past vertical, through 95 degrees, or to cycle in the other direction, from up to down.

The lid 43 uses the same motor movement system as the seat 47. The slip clutches exert only a minimal amount of force on a user if they come into contact with the seat 47 or lid 43 while they are moving. If the system activates while a user is on the seat 47, the computer 98 detects that the seat 47 or lid 43 is in a stalled condition by the increased motor voltage load, in Figs 7A, 7B, 9A and 9B. With movement stalled, the computer 98 instructs the motor controller 99 to stop the seat 47 or lid 43 movement, reverse its direction, and then stop the movement upon reaching the up or down resting place.

When the computer 98 receives signals that the seat 47 or lid 43 has reached its up or down position, the computer 98 signals the motor controller 99 to stop the motor 89 or motors, which stops the seat 47 or lid 43 movement. Both the seat 47 and lid 43 can move in tandem, or move independently of each other.
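
The stall-handling behaviour can be sketched roughly as below; the load threshold, sensor reads and motor calls are hypothetical stubs rather than the actual control code.

```python
# Sketch: command a rotation, watch the motor load, and if the seat or lid
# stalls (e.g. a user is sitting on it), reverse direction and stop at rest.

STALL_LOAD_THRESHOLD = 1.5   # hypothetical load units indicating a stall

def read_motor_load():
    return 0.4               # stub: would read the motor voltage/current load

def at_rest_position():
    return True              # stub: would read the rotation sensor 88

def drive(direction):
    print("driving seat", direction)   # stub: would command servo controller 99

def move_seat(direction):
    drive(direction)
    while not at_rest_position():
        if read_motor_load() > STALL_LOAD_THRESHOLD:
            # stalled: reverse and return to the previous resting place
            opposite = "down" if direction == "up" else "up"
            drive(opposite)
            direction = opposite
    print("stopping motor at", direction, "rest position")

move_seat("up")
```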

When the seat 47 and lid 43 are in different positions, seat 47 down and lid 43 up, the system carries out one function or movement at a time, to avoid the seat 47 and lid 43 acting against each other, such as the seat 47 moving up and the lid 43 moving down.

The speed at which the seat 47 or lid 43 moves up or down, or cycles up and down, can be adjusted by the user in the menu options. The volume of the music, greeting or sound effects can be lowered by accessing the options menu, going to sound and music, going to volume, and saying raise or lower. Conversing with the system may relax a person and aid in elimination, which may help maladies such as constipation. Conversing with the system may be fun and informative. Conversing with the system may result in a positive bond developing between the user and the system.

Music or sound effects may cover up the sound of elimination. The user may play music or sound effects through the speaker. The user may choose music or sound effects from the system menu, or access their own imported music.

A user may import music by placing a secure digital card with the user's music stored on it into the assembly's secure digital card reader 73, abbreviated SD card reader.

The SD card reader 73 is connected to the computer 98, as shown in Fig 7B. A user can access the card reader on the left side of the housing. The SD card reader 73 has a water resistant flap 75 covering the access port.

Check that the LEDs are still connected and none of the joints were broken when layers were removed from the template. Now drop one of the layers back into the template. What we now need to do is to place a second layer on top of the first so that the anodes of the layer in the template touch the anodes of the LEDs in the upper layer. Once in place, we need to solder the anodes together, and so need a way of supporting the top layer whilst connecting the LEDs.

A strip of cardboard can accomplish this. Cut the cardboard into strips making sure the cardboard is the height needed to support the layer and bind the strips together with tape. Two of these strips should be enough to support the layer. Now that you have the layer supported, solder the anode of each LED in the top layer to the anode of the corresponding LED in the layer directly beneath it.

Once this has been done, test each LED. Connect the cathode of the bottom layer to ground and touch each of the legs on the top layer in turn with the positive supply going through the current limiting resistor. The LED on the bottom layer should light up. Repeat for each LED in the layer.

Move the cathode to the top layer and repeat the test—this time the LED on the top layer should light. Again, repeat for each LED in the layer. Now add the remaining layers. Just for safety, test every layer in the cube as it is built up. This repeated testing sounds like a big pain, but trust me it's worth it.

At this point you will have a cube of LEDs looking something like this: Now trim the cathodes that are still flying off into space.

Building the Controller Board

The controller board will allow any of the LEDs in the cube to be turned on by a Netduino Mini using only seven pins. To light the entire cube, we need to switch on each layer in turn, doing so fast enough to give the impression of a static image.

This is where the principle of Persistence of Vision comes into play. The basic algorithm is sketched in the code below. Our basic building blocks for the controller are as follows: the board has a series of eight cascaded 74HC shift registers, which allows us to control 64 LEDs (8 chips x 8 outputs).
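
A minimal sketch of such a refresh loop, assuming a buffer of eight bytes per layer and hypothetical shift_out, select_layer and disable_layers helpers, is shown here in Python for illustration (the real cube code runs on the .NET Micro Framework):

```python
# Layer-multiplexing refresh loop: each pass shifts one layer's data into the
# cascaded shift registers, enables that layer's transistor, waits briefly,
# then moves on. Hardware calls are hypothetical stubs.

import time

buffer = [[0x00] * 8 for _ in range(8)]   # 8 layers x 8 bytes of LED data

def shift_out(data_bytes):
    pass   # stub: clock 64 bits into the cascaded shift registers (e.g. via SPI)

def select_layer(layer):
    pass   # stub: drive the 3 layer-select lines so this layer's transistor turns on

def disable_layers():
    pass   # stub: pull the decoder's enable line so no layer is grounded

def refresh_once():
    for layer in range(8):
        disable_layers()          # avoid ghosting while new data is loaded
        shift_out(buffer[layer])  # load this layer's 64 bits
        select_layer(layer)       # ground this layer so its LEDs can light
        time.sleep(0.002)         # ~2 ms per layer -> whole cube refreshes at 60+ Hz

refresh_once()                    # in practice this runs continuously on its own thread
```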

The following schematic shows how two of these registers should be wired together: The above should be repeated until you have eight shift registers cascaded. The output from the register is 5V and will give more than enough power to burn out an LED, so we need to put a current limiting resistor in the circuit. Putting this together gives the following layout: Each socket holds a 74HC shift register. The connections are identical for each register with the data cascaded into the next register.

So if we look at the bottom left socket you will see the following: Below the socket there is a connector that allows the output to be connected to the LEDs in the cube. Above the connector are the resistors that limit the current flowing through the LEDs. The socket above that will hold the 74HC shift register. To the left of the socket is a nF capacitor. This smooths out the power supplied to the shift register and is connected between the power input and ground and placed as close to the IC as possible.

The following colors have been used. The data from the registers is cascaded, and so we can store 64 bits of information, 1 bit for each LED in a layer.

Layer Switching

The layer switching logic allows the controller to connect any one of the layers to ground using a common cathode. This is achieved by using a transistor as a switch. The TIP was selected because it is capable of sinking 2A. This means we may be drawing 1. If you use a different LED, you will need to verify that the shift registers, power supply, and the transistor are capable of dealing with the amount of power you will be drawing.

The schematic for the layer switching looks like this: The 74HC has three input signals. These represent a binary number. The chip decodes this number onto one of its eight output lines. The output from each line, 0 through 7, is then fed into the base of a TIP transistor.

This turns on the appropriate layer by connecting the layer through to ground. One line from the Netduino Mini controls the enable line on the 74HC chip. This allows us to turn all of the outputs off whilst a new value is being loaded into the chip.
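
As an illustration of driving the three select lines, the sketch below presents the layer number in binary and toggles a hypothetical enable line; the pin names and write_pin helper are invented for the example.

```python
# Sketch of layer selection: three output pins form a binary number 0-7, the
# decoder turns on the matching output, and that output drives the transistor
# grounding the chosen layer.

def write_pin(pin, value):
    pass   # stub: set a digital output pin high (True) or low (False)

SELECT_PINS = ("A0", "A1", "A2")   # hypothetical pin names for the decoder inputs
ENABLE_PIN = "EN"                  # hypothetical enable line

def select_layer(layer):
    assert 0 <= layer <= 7
    write_pin(ENABLE_PIN, False)            # disable outputs while changing the value
    for bit, pin in enumerate(SELECT_PINS):
        write_pin(pin, bool((layer >> bit) & 1))   # present the layer number in binary
    write_pin(ENABLE_PIN, True)             # re-enable: the chosen layer is grounded

select_layer(5)   # e.g. binary 101 on the three select lines
```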

The Completed Controller Board

The completed controller board looks something like this: Note that there is a pair of connectors to the top right and another pair at the bottom left of the Netduino Mini. The pair at the top right break out the COM1 port. Of the two at the bottom left, one allows grounding of an FTDI lead connected to the controller board, and the other socket is not connected.

And on the underside:

Connecting the Cube and Controller

The final task from a hardware point of view is to connect the cube to the controller board. I tried both ribbon cable and alarm cable. The alarm cable was a little more difficult to connect but was flexible.

The ribbon cable was easier to work with but was not as flexible. The principle is the same whichever you choose. Place the cube on a flat surface, with the anodes touching the surface and one face of the cube facing you. The connections should be made so that the lower back left corner is co-ordinate (0, 0, 0).

The co-ordinates increase moving to the right, towards you and up. So looking at the controller board above, shift register 0 connects to the LEDs farthest away from you with output 0 from the register connecting to the LED to the far left. Cut and make the 8 cables according to this pattern varying the lengths to suit the location of the controller board with respect to the cube. Each cable will need a single eight-way socket on one end with the other end connected to the appropriate LED.

The layer selection logic should be connected using a similar cable. Each layer should be connected to a TIP with layer 0 being the bottom layer. The cube will then look like this: Connecting it up to the controller: I found it easier to be consistent and wire each plug and LED identically, and so all of the connections above have a black wire to the right of the connector.

It helps with connecting things up later. If we have everything connected, then we only need one more thing: software.

Software

The software running the cube needs to perform two main tasks: work out which LEDs should be turned on, and run the cube display. These two tasks need to be performed at the same time, or so fast that they appear to run at the same time. The .NET Micro Framework has a built-in mechanism that allows us to do this: threading.

Threading allows us to do this by running two tasks interleaved. So task 1 will run for a while, the system will then switch and run task 2 for a while, then switch back to task 1, and so on. To do this, the software is split into two parts: the main program, which decides what to display, and a separate class that runs the display. It is a relatively simple class containing the following methods: a constructor, a buffer update method, and a display buffer method. The constructor sets everything up by instantiating an instance of the SPI class and setting the buffer, which contains the data to be displayed, to empty, effectively clearing the cube. Each byte corresponds to a vertical layer in the cube.

The UpdateBuffer method allows the calling program to change what is displayed in the cube.
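
A rough Python rendering of this class is sketched below for illustration; the original is .NET Micro Framework code using the SPI class, and the buffer layout (eight bytes per layer), the snake_case method names and the hardware stubs here are assumptions.

```python
# Sketch of the display class described above: a constructor that clears the
# buffer, an update method for new LED data, and a display method that runs
# one multiplexing pass over the layers.

class CubeDisplay:
    def __init__(self):
        # The constructor sets everything up and clears the buffer.
        self._spi = None                                  # stub: would configure the SPI bus here
        self._buffer = [[0x00] * 8 for _ in range(8)]     # 8 layers x 8 bytes, all LEDs off

    def update_buffer(self, new_buffer):
        """UpdateBuffer: the calling program supplies new LED data for the cube."""
        self._buffer = [list(layer) for layer in new_buffer]

    def display_buffer(self):
        """DisplayBuffer: one multiplexing pass over the layers (run repeatedly on a thread)."""
        for layer_index, layer_bytes in enumerate(self._buffer):
            self._shift_out(layer_bytes)       # stub: push 64 bits to the shift registers
            self._select_layer(layer_index)    # stub: ground this layer via its transistor

    def _shift_out(self, data):
        pass                                   # stub for the SPI write

    def _select_layer(self, index):
        pass                                   # stub for the 3-bit layer select

# Example: light every LED in the bottom layer, then run one display pass.
cube = CubeDisplay()
frame = [[0x00] * 8 for _ in range(8)]
frame[0] = [0xFF] * 8
cube.update_buffer(frame)
cube.display_buffer()
```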