| Unnamed: 0 (int64) | level_0 (int64) | ApplicationNumber (int64) | ArtUnit (int64) | Abstract (string) | Claims (string) | abstract-claims (string) | TechCenter (int64) |
|---|---|---|---|---|---|---|---|
9,700 | 9,700 | 15,298,125 | 2,653 | A method is disclosed of controlling a motor vehicle entertainment system based upon the occupancy of the motor vehicle. The occupancy of the motor vehicle is judged based upon inputs received from a number of seat sensors. The motor vehicle entertainment system includes a number of customised playlists based upon vehicle occupant identity. The motor vehicle entertainment system automatically selects a suitable playlist from the number of customised playlists based upon inferred identities of the occupants of the motor vehicle. | 1. A method of controlling a motor vehicle entertainment system comprising:
establishing an identity of one or more occupants of the vehicle; and automatically selecting an audio output based upon the identity of the vehicle occupants. 2. The method of claim 1, wherein selecting an audio output based upon the identity of the vehicle occupants comprises providing for one of the identified occupants of the vehicle a customized playlist specific to that occupant. 3. The method of claim 1, wherein selecting an audio output based upon the identity of the vehicle occupants comprises automatically selecting for one of the identified occupants of the motor vehicle a preferred radio station based upon previous use of the radio by the identified occupant and producing an audio output from the automatically selected radio station. 4. The method of claim 1, wherein the method further comprises saving, using at least one memory device, for an identified occupant of the vehicle a customized playlist for the identified occupant. 5. The method of claim 1, wherein establishing the identity of one or more occupants of the vehicle comprises using at least one of a seat fore-aft position sensor and a seat mass sensor to establish characteristics of an occupant of a seat and saving, via at least one memory device, the characteristics of the occupant of the seat as an occupant identity reference. 6. The method of claim 5 further comprising selecting from a number of saved playlists a customized playlist based upon the established identity of the one or more occupants and providing the one or more occupants of the vehicle with the customized playlist. 7. The method of claim 1 further comprising using at least one of a seat fore-aft position sensor and a seat mass sensor to establish the identity of a driver of the vehicle, automatically selecting from a number of saved playlists a customized playlist based upon the identity of the driver and automatically providing an audio output from the automatically selected customized playlist. 8. 
The method of claim 1 further comprising using at least one of a seat fore-aft position sensor and a seat mass sensor to establish the identity of a driver of the vehicle and automatically selecting a driver preferred radio station based upon previous use of the radio by the driver and producing an audio output from the automatically selected radio station. 9. A vehicle entertainment system comprising:
an entertainment unit arranged to provide an audio output and including a programmable device arranged to establish, based upon one or more sensor inputs, the identity of an occupant and select an audio output based upon the identified occupant; and an interface operatively connected to the unit such that the interface is configured to play the selected audio output, provided to the interface by the unit. 10. The system of claim 9, wherein the sensor inputs include an input from a seat fore-aft position sensor and a seat mass sensor. 11. The system of claim 9, wherein the sensor inputs include an input from a seat fore-aft position sensor or a seat mass sensor. 12. The system of claim 9, wherein the sensor inputs are associated with a driver. 13. The system of claim 9, wherein the sensor inputs are associated with a passenger. 14. The system of claim 9, wherein a seat for a driver includes a seat fore-aft position sensor and a seat mass sensor and the identity of the driver is established based upon the outputs from the seat fore-aft position and mass sensors. 15. A vehicle comprising:
a seat having a sensor disposed within the seat and being configured to sense an occupant; and an entertainment system having a human machine interface and including a unit operatively connected to the sensor and an audio device such that in response to the sensor indicating a presence of the occupant, the unit provides an audio output to the human machine interface, wherein the audio output depends on an identity of the occupant established via the sensor. 16. The vehicle of claim 15, wherein the audio output comprises a customized playlist specific to the occupant. 17. The vehicle of claim 15, wherein the sensor is disposed within a seat of the vehicle and is configured to identify the occupant using a fore-aft seat position. 18. The vehicle of claim 15, wherein the sensor is disposed within a seat of the vehicle and is configured to identify the occupant using a mass of the occupant. 19. The vehicle of claim 15, wherein the audio output comprises a preferred radio station specific to the occupant. 20. The vehicle of claim 15, wherein the audio output comprises a number of customized playlists based on the identified occupant. | (abstract-claims: verbatim concatenation of the Abstract and Claims columns) | 2,600 |
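The first record's claims describe identifying a vehicle occupant from seat fore-aft position and mass readings and then selecting that occupant's stored playlist. As a rough sketch of that logic, with all names, the profile structure, and the matching tolerance invented for illustration (none of this is taken from the patent):

```python
# Hypothetical sketch: match seat-sensor readings against stored occupant
# profiles (claims 1, 5, 6) and fall back to a shared default playlist.

def identify_occupant(profiles, fore_aft_mm, mass_kg, tolerance=0.1):
    """Return the profile whose stored seat position and mass best match
    the current readings, or None if nothing is within tolerance."""
    best, best_score = None, tolerance
    for profile in profiles:
        # Average normalized distance between stored and measured values.
        score = (abs(profile["fore_aft_mm"] - fore_aft_mm) / max(profile["fore_aft_mm"], 1)
                 + abs(profile["mass_kg"] - mass_kg) / max(profile["mass_kg"], 1)) / 2
        if score < best_score:
            best, best_score = profile, score
    return best

def select_playlist(profiles, fore_aft_mm, mass_kg, default="shared"):
    occupant = identify_occupant(profiles, fore_aft_mm, mass_kg)
    return occupant["playlist"] if occupant else default

profiles = [
    {"name": "driver_a", "fore_aft_mm": 120, "mass_kg": 82, "playlist": "rock"},
    {"name": "driver_b", "fore_aft_mm": 95, "mass_kg": 61, "playlist": "jazz"},
]
```

A real system would presumably calibrate per seat; the shared-default fallback mirrors the abstract's idea of selecting "a suitable playlist" only when an identity can be inferred.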
9,701 | 9,701 | 14,496,934 | 2,613 | A data queuing and format apparatus is disclosed. A first selection circuit may be configured to selectively couple a first subset of data to a first plurality of data lines dependent upon control information, and a second selection circuit may be configured to selectively couple a second subset of data to a second plurality of data lines dependent upon the control information. A storage array may include multiple storage units, and each storage unit may be configured to receive data from one or more data lines of either the first or second plurality of data lines dependent upon the control information. | 1. An apparatus, comprising:
a first selection circuit configured to selectively couple data bits of a first subset of a plurality of data bits to respective data lines of a first plurality of data lines; a second selection circuit configured to selectively couple data bits of a second subset of the plurality of data bits to respective data lines of a second plurality of data lines; and a first storage array including a plurality of storage units, wherein each storage unit of the plurality of storage units is configured to selectively receive data from at least one data line of the first plurality of data lines or at least one data line of the second plurality of data lines. 2. The apparatus of claim 1, wherein the first selection circuit includes a plurality of multiplex circuits, wherein each multiplex circuit is configured to selectively couple a given data bit of the plurality of data bits to a respective data line of the first plurality of data lines. 3. The apparatus of claim 1, wherein the second selection circuit includes a plurality of multiplex circuits, wherein each multiplex circuit is configured to selectively couple a given data bit of the plurality of data bits to a respective data line of the second plurality of data lines. 4. The apparatus of claim 1, wherein each storage unit of the plurality of storage units includes one or more multiplex circuits, wherein each multiplex circuit is configured to receive data from a given data line of the first plurality of data lines or a given data line of a second plurality of data lines. 5. The apparatus of claim 1, further comprising a second storage array configured to store control information. 6. 
The apparatus of claim 5, wherein the first selection circuit is configured to selectively couple the data bits of the first subset of the plurality of data bits to the respective data lines of the first plurality of data lines dependent upon at least a first portion of the control information, and wherein the second selection circuit is configured to selectively couple the data bits of the second subset of the plurality of data bits to the respective data lines of the second plurality of data lines dependent upon at least a second portion of the control information. 7. A method, comprising:
receiving control information and data from a memory, wherein the data includes a plurality of data bits; storing the control information in a first queue; selecting a subset of the plurality of data bits dependent upon the control information stored in the first queue; and storing the subset of the plurality of data bits into a second queue. 8. The method of claim 7, wherein the second queue includes a plurality of data storage units, wherein each data storage unit is coupled to at least one data line of a first plurality of data lines, and at least one data line of a second plurality of data lines. 9. The method of claim 8, wherein a given data line of the first plurality of data lines is orthogonal to a respective data line of the second plurality of data lines. 10. The method of claim 7, wherein selecting the subset of the plurality of data bits dependent upon the stored control information comprises coupling the data bits of the subset of the plurality of data bits to respective data lines of the first plurality of data lines. 11. The method of claim 8, wherein storing the subset of the plurality of data bits comprises storing one or more data bits of the subset of the plurality of data bits in a given one of the plurality of data storage units. 12. The method of claim 11, wherein storing the one or more data bits of the subset of the plurality of data bits in the given one of the plurality of data storage units comprises selectively receiving data from either the at least one data line of the first plurality of data lines or the at least one data line of the second plurality of data lines. 13. The method of claim 7, wherein storing the control information comprises decoding the control information. 14. The method of claim 7, further comprising sending at least a portion of the stored subset of the plurality of data bits to at least one register. 15. A system, comprising:
a memory; and a graphics unit including a first functional unit and a second functional unit, wherein the graphics unit is configured to:
receive control information and data from the memory;
store the control information in a first queue;
select a portion of the data dependent upon the control information stored in the first queue; and
store the portion of the data in a second queue, wherein the second queue includes a plurality of storage units, and wherein each storage unit of the plurality of storage units is coupled to at least one data line of a first plurality of data lines and at least one data line of a second plurality of data lines. 16. The system of claim 15, wherein to select the portion of the data dependent upon the control information, the graphics unit is further configured to couple data bits of a plurality of data bits of the portion of the data to respective data lines of a first plurality of data lines. 17. The system of claim 16, wherein to store the portion of the data in the second queue, the graphics unit is further configured to store the portion of the data into a subset of a plurality of storage units in the second queue dependent upon the control information. 18. The system of claim 16, wherein the second portion of the data includes a plurality of data bits, and wherein to store the portion of the data in the second queue, the graphics unit is further configured to store a subset of the plurality of data bits into a given one of a plurality of storage units in the second queue. 19. The system of claim 18, wherein to store the subset of the plurality of data bits into the given one of the plurality of storage units in the second queue, the graphics unit is further configured to couple the given one of the plurality of storage units to at least one data line of the first plurality of data lines. 20. The system of claim 16, wherein each data line of the first plurality of data lines is orthogonal to a respective data line of the second plurality of data lines. | (abstract-claims: verbatim concatenation of the Abstract and Claims columns) | 2,600 |
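The second record's method claims (7-14) amount to: queue incoming control information, then use it to select a subset of data bits for a second queue. A minimal model of that flow, where the bitmask encoding of the control information and the function name are assumptions for illustration, not details from the patent:

```python
# Hypothetical sketch of the claimed flow: store control information in a
# first queue, select data bits dependent upon it, store them in a second queue.
from collections import deque

control_queue = deque()   # "first queue": holds control words
data_queue = deque()      # "second queue": holds selected bit subsets

def receive(control_mask, data_bits):
    """Store the control word, then forward only the data bits whose
    position is set in that word into the data queue."""
    control_queue.append(control_mask)
    mask = control_queue[-1]  # select dependent upon the *stored* control info
    selected = [bit for i, bit in enumerate(data_bits) if (mask >> i) & 1]
    data_queue.append(selected)
    return selected
```

In the hardware described by the claims this selection is done by multiplex circuits driving orthogonal data lines; the mask here just stands in for that routing decision.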
9,702 | 9,702 | 14,939,324 | 2,612 | An automated personnel identification system. The system includes a portable communications device that stores an identifier, which uniquely identifies the portable communications device, a user of the portable communications device, or both. The system also includes a garment. The garment includes a communications interface, a light source, and an electronic controller electrically coupled to the communications interface and to the light source. The electronic controller is configured to receive the identifier, via the communications interface, from the portable communications device. The electronic controller is further configured to cause the light source to generate a modulated optical output based on the identifier. In some embodiments, the electronic controller is further configured to receive a status indication from the portable communications device via the communications interface, and activate the light source based on the status indication. | 1. An automated personnel identification system, the system comprising:
a portable communications device storing an identifier uniquely identifying at least one of a group consisting of the portable communications device and a user of the portable communications device; a garment including
a communications interface,
a light source,
an electronic controller electrically coupled to the communications interface and to the light source, the electronic controller configured to
receive the identifier, via the communications interface, from the portable communications device; and
cause the light source to generate a modulated optical output based on the identifier. 2. The personnel identification system of claim 1, wherein the electronic controller is further configured to
receive a status indication from the portable communications device via the communications interface; and activate the light source based on the status indication. 3. The personnel identification system of claim 2, wherein the status indication is one selected from the group consisting of a receive status and a transmit status. 4. The personnel identification system of claim 1, wherein the electronic controller is further configured to activate the light source intermittently. 5. The personnel identification system of claim 1, wherein the electronic controller is further configured to activate the light source when the electronic controller receives an output of a user interface device. 6. The personnel identification system of claim 1, wherein the light source is an infrared light source and the modulated optical output is in the infrared spectrum. 7. The personnel identification system of claim 1, wherein the light source is a visible light source and the modulated optical output is in the visible spectrum. 8. The personnel identification system of claim 1, further comprising:
a camera configured to capture an image of the garment and at least a portion of the modulated optical output; and a display processor electrically coupled to the camera and configured to
receive the image,
determine the identifier from the modulated optical output,
generate an overlay image based on the image and the identifier, and
receive, via a graphical user interface, a command based on the overlay image. 9. The personnel identification system of claim 8, wherein the command is at least one selected from the group consisting of a push-to-talk request, a talk group configuration request, a channel configuration request, and an information request. 10. A method for operating a personnel identification system that includes a portable communications device and a garment, the method comprising:
storing, by the portable communications device, an identifier associated with a user; receiving, by an electronic controller of the garment, the identifier from the portable communications device; and causing, by the electronic controller, a light source of the garment to generate a modulated optical output based on the identifier. 11. The method of claim 10, further comprising:
receiving, at a communications interface of the garment, a status indication from the portable communications device; and activating, by the electronic controller, the light source based on the status indication. 12. The method of claim 11, wherein receiving the status indication includes receiving one selected from the group consisting of a receive status and a transmit status. 13. The method of claim 10, further comprising:
activating, by the electronic controller, the light source intermittently. 14. The method of claim 10, further comprising:
activating the light source, by the electronic controller, when the electronic controller receives an output of a user interface device. 15. The method of claim 10, wherein causing the light source to generate a modulated output includes causing the light source to generate a modulated optical output in the infrared spectrum. 16. The method of claim 10, wherein causing the light source to generate a modulated output includes causing the light source to generate a modulated optical output in the visible spectrum. 17. The method of claim 10, further comprising:
capturing, by a camera configured, an image of the garment and at least a portion of the modulated optical output; and receiving, by a display processor electrically coupled to the camera, the image, determining, by the display processor, the identifier from the modulated optical output, generating, by the display processor, an overlay image based on the image and the identifier, and receiving, by a display processor via a graphical user interface, a command based on the overlay image. 18. The method of claim 17, wherein receiving the command based on the overlay image includes receiving at least one selected from the group consisting of a push-to-talk request, a talk group configuration request, a channel configuration request, and an information request. | An automated personnel identification system. The system includes a portable communications device that stores an identifier, which uniquely identifies the portable communications device, a user of the portable communications device, or both. The system also includes a garment. The garment includes a communications interface, a light source, and an electronic controller electrically coupled to the communications interface and to the light source. The electronic controller is configured to receive the identifier, via the communications interface, from the portable communications device. The electronic controller is further configured to cause the light source to generate a modulated optical output based on the identifier. In some embodiments, the electronic controller is further configured to receive a status indication from the portable communications device via the communications interface, and activate the light source based on the status indication.1. An automated personnel identification system, the system comprising:
a portable communications device storing an identifier uniquely identifying at least one of a group consisting of the portable communications device and a user of the portable communications device; a garment including
a communications interface,
a light source,
an electronic controller electrically coupled to the communications interface and to the light source, the electronic controller configured to
receive the identifier, via the communications interface, from the portable communications device; and
cause the light source to generate a modulated optical output based on the identifier. 2. The personnel identification system of claim 1, wherein the electronic controller is further configured to
receive a status indication from the portable communications device via the communications interface; and activate the light source based on the status indication. 3. The personnel identification system of claim 2, wherein the status indication is one selected from the group consisting of a receive status and a transmit status. 4. The personnel identification system of claim 1, wherein the electronic controller is further configured to activate the light source intermittently. 5. The personnel identification system of claim 1, wherein the electronic controller is further configured to activate the light source when the electronic controller receives an output of a user interface device. 6. The personnel identification system of claim 1, wherein the light source is an infrared light source and the modulated optical output is in the infrared spectrum. 7. The personnel identification system of claim 1, wherein the light source is a visible light source and the modulated optical output is in the visible spectrum. 8. The personnel identification system of claim 1, further comprising:
a camera configured to capture an image of the garment and at least a portion of the modulated optical output; and a display processor electrically coupled to the camera and configured to
receive the image,
determine the identifier from the modulated optical output,
generate an overlay image based on the image and the identifier, and
receive, via a graphical user interface, a command based on the overlay image. 9. The personnel identification system of claim 8, wherein the command is at least one selected from the group consisting of a push-to-talk request, a talk group configuration request, a channel configuration request, and an information request. 10. A method for operating a personnel identification system that includes a portable communications device and a garment, the method comprising:
storing, by the portable communications device, an identifier associated with a user; receiving, by an electronic controller of the garment, the identifier from the portable communications device; and causing, by the electronic controller, a light source of the garment to generate a modulated optical output based on the identifier. 11. The method of claim 10, further comprising:
receiving, at a communications interface of the garment, a status indication from the portable communications device; and activating, by the electronic controller, the light source based on the status indication. 12. The method of claim 11, wherein receiving the status indication includes receiving one selected from the group consisting of a receive status and a transmit status. 13. The method of claim 10, further comprising:
activating, by the electronic controller, the light source intermittently. 14. The method of claim 10, further comprising:
activating the light source, by the electronic controller, when the electronic controller receives an output of a user interface device. 15. The method of claim 10, wherein causing the light source to generate a modulated output includes causing the light source to generate a modulated optical output in the infrared spectrum. 16. The method of claim 10, wherein causing the light source to generate a modulated output includes causing the light source to generate a modulated optical output in the visible spectrum. 17. The method of claim 10, further comprising:
capturing, by a camera, an image of the garment and at least a portion of the modulated optical output; and receiving, by a display processor electrically coupled to the camera, the image, determining, by the display processor, the identifier from the modulated optical output, generating, by the display processor, an overlay image based on the image and the identifier, and receiving, by the display processor, via a graphical user interface, a command based on the overlay image. 18. The method of claim 17, wherein receiving the command based on the overlay image includes receiving at least one selected from the group consisting of a push-to-talk request, a talk group configuration request, a channel configuration request, and an information request. | 2,600 |
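The claims above describe a garment controller that modulates a light source to carry an identifier, and a display processor that recovers the identifier from camera frames. As a minimal, hypothetical sketch of one way such a scheme could work, the example below uses simple on-off keying with a fixed sync preamble; the function names, the 16-bit identifier width, and the preamble pattern are all illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch: encode an identifier as on/off light pulses
# (on-off keying) and recover it from sampled pulse intensities.

def encode_identifier(identifier, bits=16):
    """Return a framed on/off pulse sequence for the identifier."""
    preamble = [1, 1, 1, 0]  # fixed sync pattern so the receiver can align
    payload = [(identifier >> i) & 1 for i in range(bits - 1, -1, -1)]
    return preamble + payload

def decode_identifier(samples, bits=16):
    """Recover the identifier from a sampled pulse sequence, or None."""
    preamble = [1, 1, 1, 0]
    for start in range(len(samples) - len(preamble) - bits + 1):
        if samples[start:start + len(preamble)] == preamble:
            payload = samples[start + len(preamble):start + len(preamble) + bits]
            value = 0
            for bit in payload:
                value = (value << 1) | bit
            return value
    return None

pulses = encode_identifier(0xBEEF)
assert decode_identifier(pulses) == 0xBEEF
```

A real system would additionally have to sample the light source at a known frame rate and tolerate noise; this sketch only shows the framing and bit-recovery idea.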
9,703 | 9,703 | 14,052,014 | 2,644 | A vehicle computer system (VCS) configured to communicate with one or more mobile devices, comprising a first processor configured to automatically create a priority list by utilizing one or more factors associated with communication activity between the vehicle computer system and the one or more mobile devices, wherein the priority list includes instructions to pair one or more mobile devices with the VCS and indicates the order of connecting to the one or more mobile devices. The vehicle computer system also includes a wireless transceiver including a second processor configured to receive the priority list from the first processor and establish a wireless connection with one or more mobile devices based on the priority list. | 1. A vehicle computer system (VCS) configured to communicate with one or more mobile devices, comprising:
a first processor configured to automatically create a priority list by utilizing one or more factors associated with communication activity between the vehicle computer system and the one or more mobile devices, wherein the priority list includes instructions to pair one or more mobile devices with the VCS and indicates an order of connecting to the one or more mobile devices; and a wireless transceiver including a second processor configured to:
receive the priority list from the first processor;
establish a wireless connection with one or more mobile devices based on the priority list. 2. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the priority list further indicates a number of attempts to establish a connection with the one or more mobile device before timing out. 3. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the one or more factors include whether the one or more mobile devices are a last connected device. 4. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the one or more factors include whether the one or more mobile devices are a second to last connected device. 5. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the one or more factors include whether the one or more mobile devices are connected most frequently during a time period. 6. The vehicle computer system configured to communicate with a mobile device of claim 5, wherein the time period is one week. 7. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the one or more factors include whether the one or more mobile devices are a mobile device connected most during a specific day of the week. 8. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the one or more factors include whether the one or more mobile devices are a mobile device connected most during a specific time of day. 9. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the second processor is a base-band processor. 10. A vehicle computer system (VCS), comprising:
a processor configured to:
automatically create a priority list by utilizing one or more factors associated with wireless pairing activity between the VCS and mobile-devices, wherein the priority list indicates an order of connecting to the mobile-devices and a number of attempts for each of the mobile-devices to connect with the VCS before timing out; and
connect with a first mobile-device based on the priority list. 11. The vehicle computer system of claim 10, wherein the processor is further configured to:
detect a disconnect between the first mobile-device and the VCS; connect to a second mobile-device based on the priority list; attempt to connect to the first mobile-device for a specific time period, while connected to the second mobile-device; and disconnect the second mobile-device upon reconnecting with the first mobile-device. 12. The VCS of claim 10, wherein the one or more factors include whether the one or more mobile devices are a mobile device connected most during a specific day of the week. 13. The VCS of claim 10, wherein the one or more factors include whether the one or more mobile devices are a favorite mobile device. 14. The VCS of claim 10, wherein the one or more factors include whether the one or more mobile devices are a last connected device. 15. The VCS of claim 10, wherein the one or more factors include whether the one or more mobile devices are connected most frequently during a time period. 16. A vehicle computer system (VCS) configured to communicate with one or more mobile devices, comprising:
a processor configured to automatically create a priority list by utilizing one or more factors associated with communication activity between the vehicle computer system and the one or more mobile devices, wherein the priority list includes instructions to pair one or more mobile devices with the VCS and indicates an order of connecting to the one or more mobile devices; and establish a wireless connection with one or more mobile devices based on the priority list. 17. The vehicle computer system of claim 16, wherein the one or more factors include whether the one or more mobile devices are a last connected device. 18. The vehicle computer system of claim 16, wherein the one or more factors include whether the one or more mobile devices are a second to last connected device. 19. The vehicle computer system of claim 16, wherein the one or more factors include whether the one or more mobile devices are connected most frequently during a time period. 20. The vehicle computer system of claim 16, wherein the one or more factors include whether the one or more mobile devices are a mobile device connected most during a specific day of the week. | A vehicle computer system (VCS) configured to communicate with one or more mobile devices, comprising a first processor configured to automatically create a priority list by utilizing one or more factors associated with communication activity between the vehicle computer system and the one or more mobile devices, wherein the priority list includes instructions to pair one or more mobile devices with the VCS and indicates the order of connecting to the one or more mobile devices. The vehicle computer system also includes a wireless transceiver including a second processor configured to receive the priority list from the first processor and establish a wireless connection with one or more mobile devices based on the priority list.1. 
A vehicle computer system (VCS) configured to communicate with one or more mobile devices, comprising:
a first processor configured to automatically create a priority list by utilizing one or more factors associated with communication activity between the vehicle computer system and the one or more mobile devices, wherein the priority list includes instructions to pair one or more mobile devices with the VCS and indicates an order of connecting to the one or more mobile devices; and a wireless transceiver including a second processor configured to:
receive the priority list from the first processor;
establish a wireless connection with one or more mobile devices based on the priority list. 2. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the priority list further indicates a number of attempts to establish a connection with the one or more mobile device before timing out. 3. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the one or more factors include whether the one or more mobile devices are a last connected device. 4. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the one or more factors include whether the one or more mobile devices are a second to last connected device. 5. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the one or more factors include whether the one or more mobile devices are connected most frequently during a time period. 6. The vehicle computer system configured to communicate with a mobile device of claim 5, wherein the time period is one week. 7. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the one or more factors include whether the one or more mobile devices are a mobile device connected most during a specific day of the week. 8. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the one or more factors include whether the one or more mobile devices are a mobile device connected most during a specific time of day. 9. The vehicle computer system configured to communicate with a mobile device of claim 1, wherein the second processor is a base-band processor. 10. A vehicle computer system (VCS), comprising:
a processor configured to:
automatically create a priority list by utilizing one or more factors associated with wireless pairing activity between the VCS and mobile-devices, wherein the priority list indicates an order of connecting to the mobile-devices and a number of attempts for each of the mobile-devices to connect with the VCS before timing out; and
connect with a first mobile-device based on the priority list. 11. The vehicle computer system of claim 10, wherein the processor is further configured to:
detect a disconnect between the first mobile-device and the VCS; connect to a second mobile-device based on the priority list; attempt to connect to the first mobile-device for a specific time period, while connected to the second mobile-device; and disconnect the second mobile-device upon reconnecting with the first mobile-device. 12. The VCS of claim 10, wherein the one or more factors include whether the one or more mobile devices are a mobile device connected most during a specific day of the week. 13. The VCS of claim 10, wherein the one or more factors include whether the one or more mobile devices are a favorite mobile device. 14. The VCS of claim 10, wherein the one or more factors include whether the one or more mobile devices are a last connected device. 15. The VCS of claim 10, wherein the one or more factors include whether the one or more mobile devices are connected most frequently during a time period. 16. A vehicle computer system (VCS) configured to communicate with one or more mobile devices, comprising:
a processor configured to automatically create a priority list by utilizing one or more factors associated with communication activity between the vehicle computer system and the one or more mobile devices, wherein the priority list includes instructions to pair one or more mobile devices with the VCS and indicates an order of connecting to the one or more mobile devices; and establish a wireless connection with one or more mobile devices based on the priority list. 17. The vehicle computer system of claim 16, wherein the one or more factors include whether the one or more mobile devices are a last connected device. 18. The vehicle computer system of claim 16, wherein the one or more factors include whether the one or more mobile devices are a second to last connected device. 19. The vehicle computer system of claim 16, wherein the one or more factors include whether the one or more mobile devices are connected most frequently during a time period. 20. The vehicle computer system of claim 16, wherein the one or more factors include whether the one or more mobile devices are a mobile device connected most during a specific day of the week. | 2,600 |
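The VCS claims above describe automatically building a priority list from factors such as the last connected device, the second-to-last connected device, and connection frequency over a period, with a retry budget per device. Below is a minimal, hypothetical sketch of such a scoring scheme; the function name, the specific score weights, and the retry count are illustrative assumptions rather than values from the patent.

```python
# Hypothetical sketch: score known devices by connection-history factors
# and emit an ordered (device, retry_budget) priority list.
from collections import Counter

def build_priority_list(history, retries=3):
    """history: chronological list of device ids (most recent last)."""
    scores = Counter(history)            # frequency-over-period factor
    if history:
        scores[history[-1]] += 100       # last connected device factor
    if len(history) > 1 and history[-2] != history[-1]:
        scores[history[-2]] += 50        # second-to-last device factor
    ordered = [device for device, _ in scores.most_common()]
    return [(device, retries) for device in ordered]

plist = build_priority_list(["phoneA", "phoneB", "phoneA", "phoneC"])
assert plist[0] == ("phoneC", 3)  # last connected device is tried first
```

The transceiver would then walk this list in order, attempting each device up to its retry budget before timing out and moving to the next entry.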
9,704 | 9,704 | 15,014,183 | 2,651 | Presented herein are bone conduction devices having housings that are complementary to the recipient's maxillary alveolar process such that the maxillary alveolar process supports the housing within the recipient's mouth. | 1. A bone conduction system, comprising:
a housing having a surface that is complementary to an outer surface of a recipient's maxillary alveolar process such that the maxillary alveolar process supports the housing within the recipient's mouth; and a transducer disposed in the housing configured to deliver mechanical output forces to the recipient so as to evoke a hearing percept of a sound signal. 2. The bone conduction system of claim 1, wherein the front surface includes an elongate cavity configured to mate with a ridge of the maxillary alveolar process. 3. The bone conduction system of claim 1, wherein a surface of the housing is textured to facilitate friction between the housing and the recipient's soft tissue. 4. The bone conduction system of claim 3, wherein the surface is textured to include a plurality of recesses. 5. The bone conduction system of claim 4, wherein the recesses comprise a plurality of elongate grooves and wherein the surface includes a plurality of elongate ridges. 6. The bone conduction system of claim 4, wherein the recesses are pores having irregular shapes. 7. The bone conduction system of claim 4, wherein the recesses are a plurality of depressions and wherein the surface includes a plurality of protrusions. 8. The bone conduction system of claim 1, further comprising:
a receiver disposed in the housing; and a power source disposed in the housing configured to provide power to the receiver and the transducer. 9. The bone conduction system of claim 8, further comprising:
an external sound processing unit that includes:
one or more sound input elements configured to generate electrical signals based on received sound signals;
a sound processor configured to process the electrical signals to generate processed signals representative of the sound signals; and
a transmitter configured to wirelessly transmit the processed signals to the receiver. 10. The bone conduction system of claim 8, further comprising:
one or more sound input elements disposed in the housing and configured to generate electrical signals based on received sound signals; and a sound processor disposed in the housing configured to process the electrical signals to generate processed signals representative of the sound signals. 11. The bone conduction system of claim 1, further comprising:
an implantable magnet configured to be implanted adjacent to the maxillary alveolar process; and a magnet disposed in or on the housing and configured to be magnetically coupled to the implantable magnet. 12. The bone conduction system of claim 1, wherein the housing includes a housing portion that is vibrationally isolated from a remainder of the housing via an isolation mechanism, and wherein the transducer is mechanically coupled to the housing portion. 13. A bone conduction device, comprising:
a housing configured to be positioned in a recipient's mouth between the recipient's tissue and gums and retained in the mouth due to inward pressure applied by at least one of the tissue or a lip of the recipient; and a transducer disposed in the housing configured to deliver mechanical output forces to the recipient so as to evoke a hearing percept of a sound signal. 14. The bone conduction device of claim 13, wherein the housing has a front surface with a shape that is complementary to an outer surface of the recipient's maxillary alveolar process such that the maxillary alveolar process supports the housing within the mouth. 15. The bone conduction device of claim 14, wherein the front surface includes an elongate cavity configured to mate with a ridge of the maxillary alveolar process. 16. The bone conduction device of claim 13, wherein the housing has a front surface with a shape that is complementary to an outer surface of the recipient's mandibular alveolar process such that the mandibular alveolar process supports the housing within the mouth. 17. The bone conduction device of claim 13, wherein the housing includes a surface that is textured to facilitate friction between the surface of the housing and the recipient's soft tissue. 18. The bone conduction device of claim 17, wherein the surface is textured to include a plurality of recesses. 19. The bone conduction device of claim 13, further comprising:
a receiver disposed in the housing; and a power source disposed in the housing configured to provide power to the receiver and the transducer. 20. The bone conduction device of claim 19, further comprising:
one or more sound input elements disposed in the housing and configured to generate electrical signals based on received sound signals; and a sound processor disposed in the housing configured to process the electrical signals to generate processed signals representative of the sound signals. | Presented herein are bone conduction devices having housings that are complementary to the recipient's maxillary alveolar process such that the maxillary alveolar process supports the housing within the recipient's mouth.1. A bone conduction system, comprising:
a housing having a surface that is complementary to an outer surface of a recipient's maxillary alveolar process such that the maxillary alveolar process supports the housing within the recipient's mouth; and a transducer disposed in the housing configured to deliver mechanical output forces to the recipient so as to evoke a hearing percept of a sound signal. 2. The bone conduction system of claim 1, wherein the front surface includes an elongate cavity configured to mate with a ridge of the maxillary alveolar process. 3. The bone conduction system of claim 1, wherein a surface of the housing is textured to facilitate friction between the housing and the recipient's soft tissue. 4. The bone conduction system of claim 3, wherein the surface is textured to include a plurality of recesses. 5. The bone conduction system of claim 4, wherein the recesses comprise a plurality of elongate grooves and wherein the surface includes a plurality of elongate ridges. 6. The bone conduction system of claim 4, wherein the recesses are pores having irregular shapes. 7. The bone conduction system of claim 4, wherein the recesses are a plurality of depressions and wherein the surface includes a plurality of protrusions. 8. The bone conduction system of claim 1, further comprising:
a receiver disposed in the housing; and a power source disposed in the housing configured to provide power to the receiver and the transducer. 9. The bone conduction system of claim 8, further comprising:
an external sound processing unit that includes:
one or more sound input elements configured to generate electrical signals based on received sound signals;
a sound processor configured to process the electrical signals to generate processed signals representative of the sound signals; and
a transmitter configured to wirelessly transmit the processed signals to the receiver. 10. The bone conduction system of claim 8, further comprising:
one or more sound input elements disposed in the housing and configured to generate electrical signals based on received sound signals; and a sound processor disposed in the housing configured to process the electrical signals to generate processed signals representative of the sound signals. 11. The bone conduction system of claim 1, further comprising:
an implantable magnet configured to be implanted adjacent to the maxillary alveolar process; and a magnet disposed in or on the housing and configured to be magnetically coupled to the implantable magnet. 12. The bone conduction system of claim 1, wherein the housing includes a housing portion that is vibrationally isolated from a remainder of the housing via an isolation mechanism, and wherein the transducer is mechanically coupled to the housing portion. 13. A bone conduction device, comprising:
a housing configured to be positioned in a recipient's mouth between the recipient's tissue and gums and retained in the mouth due to inward pressure applied by at least one of the tissue or a lip of the recipient; and a transducer disposed in the housing configured to deliver mechanical output forces to the recipient so as to evoke a hearing percept of a sound signal. 14. The bone conduction device of claim 13, wherein the housing has a front surface with a shape that is complementary to an outer surface of the recipient's maxillary alveolar process such that the maxillary alveolar process supports the housing within the mouth. 15. The bone conduction device of claim 14, wherein the front surface includes an elongate cavity configured to mate with a ridge of the maxillary alveolar process. 16. The bone conduction device of claim 13, wherein the housing has a front surface with a shape that is complementary to an outer surface of the recipient's mandibular alveolar process such that the mandibular alveolar process supports the housing within the mouth. 17. The bone conduction device of claim 13, wherein the housing includes a surface that is textured to facilitate friction between the surface of the housing and the recipient's soft tissue. 18. The bone conduction device of claim 17, wherein the surface is textured to include a plurality of recesses. 19. The bone conduction device of claim 13, further comprising:
a receiver disposed in the housing; and a power source disposed in the housing configured to provide power to the receiver and the transducer. 20. The bone conduction device of claim 19, further comprising:
one or more sound input elements disposed in the housing and configured to generate electrical signals based on received sound signals; and a sound processor disposed in the housing configured to process the electrical signals to generate processed signals representative of the sound signals. | 2,600 |
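Claims 9, 10, and 20 above describe a signal chain: sound input elements generate electrical signals, a sound processor produces processed signals representative of the sound, and a transmitter forwards them to the receiver in the in-mouth housing. The sketch below is a hypothetical stand-in for that chain; the function names and the use of a plain gain as the "processing" step are illustrative assumptions only.

```python
# Hypothetical sketch of the claimed signal chain: input -> process -> transmit.

def sound_input(samples):
    """Sound input stage: pass raw electrical-signal samples through."""
    return list(samples)

def sound_processor(signal, gain=2.0):
    """Processing stage: apply a simple gain as a stand-in for real DSP."""
    return [s * gain for s in signal]

def transmit(processed):
    """Wireless stage: package processed signals for the in-mouth receiver."""
    return {"payload": processed}

packet = transmit(sound_processor(sound_input([0.5, -0.25, 1.0])))
assert packet["payload"] == [1.0, -0.5, 2.0]
```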
9,705 | 9,705 | 12,051,708 | 2,685 | The present invention provides a trading keyboard that can be configured both physically and functionally according to a user's preferences. The trading keyboard preferably includes self-identifying key covers that can be physically arranged on any of the keyboard's key bases. Detection mechanisms included in the key bases detect the commands of the trading application associated with each self-identifying key cover. Therefore, the user may reposition the key covers on the keyboard according to the user's preferences, and yet retain the same functionality for the key covers. The user may also switch between keyboard modes that allow the keyboard to be functionally reconfigured. By selecting different modes, the user can chose between different keyboard mapping configurations that assign the functions of the trading application to the keys in different arrangements. The mode selection mechanism may also be used to select between different commands associated with a single key or key cover. | 1. A keyboard comprising:
at least one numeric key that when struck transmits one of a plurality of signals to an application, wherein a first of the plurality of signals corresponds to a first number associated with the at least one numeric key and a second of the plurality of signals corresponds to a second number associated with at least one numeric key; and a mode selection key selectable for selecting a state of the keyboard from a plurality of states, wherein in a first state, the at least one numeric key when struck transmits the first signal to the application and in a second state, the at least one numeric key when struck transmits the second signal to the application. 2. The keyboard of claim 1, wherein the second number is a predetermined multiple of the first number. 3. The keyboard of claim 2, wherein the second number is a predetermined multiple of at least 10 times the first number. 4. The keyboard of claim 2, wherein the second number is a predetermined multiple of at least 100 times the first number. 5. The keyboard of claim 2, comprising a plurality of key bases configured to receive a key cover associated with the at least one numerical key, the key cover comprising a mechanism for identifying the key cover, each of the plurality of key bases comprising a mechanism for detecting the mechanism for identifying the key cover, wherein each of the plurality of key bases is configured to generate the plurality of signals associated with the numeric key. 6. The keyboard of claim 2, comprising a plurality of key bases configured to receive a key cover associated with the at least one numerical key, and a mechanism that alternatively disables or enables at least one of the plurality of key bases. | The present invention provides a trading keyboard that can be configured both physically and functionally according to a user's preferences. The trading keyboard preferably includes self-identifying key covers that can be physically arranged on any of the keyboard's key bases. 
Detection mechanisms included in the key bases detect the commands of the trading application associated with each self-identifying key cover. Therefore, the user may reposition the key covers on the keyboard according to the user's preferences, and yet retain the same functionality for the key covers. The user may also switch between keyboard modes that allow the keyboard to be functionally reconfigured. By selecting different modes, the user can choose between different keyboard mapping configurations that assign the functions of the trading application to the keys in different arrangements. The mode selection mechanism may also be used to select between different commands associated with a single key or key cover.1. A keyboard comprising:
at least one numeric key that when struck transmits one of a plurality of signals to an application, wherein a first of the plurality of signals corresponds to a first number associated with the at least one numeric key and a second of the plurality of signals corresponds to a second number associated with at least one numeric key; and a mode selection key selectable for selecting a state of the keyboard from a plurality of states, wherein in a first state, the at least one numeric key when struck transmits the first signal to the application and in a second state, the at least one numeric key when struck transmits the second signal to the application. 2. The keyboard of claim 1, wherein the second number is a predetermined multiple of the first number. 3. The keyboard of claim 2, wherein the second number is a predetermined multiple of at least 10 times the first number. 4. The keyboard of claim 2, wherein the second number is a predetermined multiple of at least 100 times the first number. 5. The keyboard of claim 2, comprising a plurality of key bases configured to receive a key cover associated with the at least one numerical key, the key cover comprising a mechanism for identifying the key cover, each of the plurality of key bases comprising a mechanism for detecting the mechanism for identifying the key cover, wherein each of the plurality of key bases is configured to generate the plurality of signals associated with the numeric key. 6. The keyboard of claim 2, comprising a plurality of key bases configured to receive a key cover associated with the at least one numerical key, and a mechanism that alternatively disables or enables at least one of the plurality of key bases. | 2,600 |
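The keyboard claims above describe a mode selection key that switches a numeric key between transmitting its face value and transmitting a predetermined multiple of that value. A minimal, hypothetical sketch of that behavior follows; the class name, the two-state cycle, and the choice of 100x as the multiple are illustrative assumptions (the claims only require "at least 10 times" or "at least 100 times").

```python
# Hypothetical sketch: a numeric key whose transmitted number depends
# on the keyboard state selected by the mode key.

class TradingKeyboard:
    MULTIPLIERS = {1: 1, 2: 100}  # keyboard state -> predetermined multiple

    def __init__(self):
        self.state = 1

    def press_mode_key(self):
        """Cycle between the keyboard's two states."""
        self.state = 2 if self.state == 1 else 1

    def press_numeric_key(self, face_value):
        """Return the signal (number) transmitted to the application."""
        return face_value * self.MULTIPLIERS[self.state]

kb = TradingKeyboard()
assert kb.press_numeric_key(5) == 5    # first state: face value
kb.press_mode_key()
assert kb.press_numeric_key(5) == 500  # second state: 100x multiple
```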
9,706 | 9,706 | 13,191,364 | 2,616 | A technique is disclosed for a graphics processing unit (GPU) to enter and exit a power saving deep sleep mode. The technique involves preserving processing state within local memory by configuring the local memory to operate in a self-refresh mode while the GPU is powered off for deep sleep. An interface circuit coupled to the local memory is configured to prevent spurious GPU signals from disrupting proper self-refresh of the local memory. Spurious GPU signals may result from GPU power down and GPU power up events associated with the GPU entering and exiting the deep sleep mode. | 1. A method implemented by a graphics processing unit (GPU) for entering and exiting sleep mode, the method comprising:
receiving a command to enter a sleep mode; saving internal processing state for the GPU to a memory system local to the GPU; causing at least one memory device included in the memory system to enter a self-refresh mode; and entering a power-down state. 2. The method of claim 1, further comprising stalling all incoming workloads and completing all in-progress processing in order to halt processing within the GPU. 3. The method of claim 1, wherein the at least one memory device comprises a dynamic random access memory (DRAM) device, and the step of saving comprises copying the internal processing state for the GPU to the DRAM device. 4. The method of claim 1, wherein the at least one memory device comprises a non-volatile memory, and the step of saving comprises copying the internal processing state for the GPU and a memory interface state to the non-volatile memory. 5. The method of claim 4, wherein the non-volatile memory comprises at least one flash memory device coupled to the GPU via an interface that is independent of an interface to a different memory device within the memory system. 6. The method of claim 1, wherein the GPU and memory system are configured to operate at a predetermined speed and operating state prior to entering the power-down state. 7. The method of claim 1, further comprising causing at least one enable signal used for controlling transmission of data between the GPU and the memory system to be clamped to a fixed voltage prior to entering the power-down state. 8. The method of claim 1, further comprising:
entering a power-on state; determining that the power-on state is associated with the sleep mode; causing the at least one memory device included in the memory system to exit the self-refresh mode; reloading the internal processing state for the GPU from the memory system to the GPU. 9. The method of claim 8, further comprising performing a power-on reset of the GPU and detecting a warm-boot state for transitioning the GPU from the power-down state to the power-on state. 10. The method of claim 8, wherein the at least one memory device comprises a DRAM device, and the step of reloading comprises copying the internal processing state for the GPU from the DRAM device to the GPU. 11. The method of claim 10, wherein the at least one memory device comprises a non-volatile memory device, and the step of reloading comprises copying the internal processing state for the GPU and a DRAM interface state from the non-volatile memory device to the GPU. 12. The method of claim 11, wherein the non-volatile memory device comprises at least one flash memory device, coupled to the GPU via an interface that is independent of the DRAM device. 13. The method of claim 8, wherein the GPU and the memory system are configured to operate at a predetermined speed and operating state prior to entering the power-on state. 14. The method of claim 8, further comprising causing at least one enable signal used for controlling transmission of data between the GPU and the memory system to be clamped to a fixed voltage prior to exiting the power-on state. 15. The method of claim 11, wherein a portion of the internal processing state for the GPU and the DRAM interface state are restored to a previously configured state based on data stored within the non-volatile memory. 16. A computer-readable storage medium including instructions that, when executed by a graphics processing unit (GPU), cause the GPU to enter and exit a sleep mode, by performing the steps of:
receiving a command to enter a sleep mode; saving internal processing state for the GPU to a memory system local to the GPU; causing at least one memory device included in the memory system to enter a self-refresh mode; and entering a power-down state. 17. The computer-readable storage medium of claim 16, further comprising stalling all incoming workloads and completing all in-progress processing in order to halt processing within the GPU. 18. The computer-readable storage medium of claim 16, wherein the at least one memory device comprises a dynamic random access memory (DRAM) device, and the step of saving comprises copying the internal processing state for the GPU to the DRAM device. 19. The computer-readable storage medium of claim 16, wherein the at least one memory device comprises a non-volatile memory, and the step of saving comprises copying the internal processing state for the GPU and a memory interface state to the non-volatile memory. 20. The computer-readable storage medium of claim 19, wherein the non-volatile memory comprises at least one flash memory device coupled to the GPU via an interface that is independent of an interface to a different memory device within the memory system. 21. The computer-readable storage medium of claim 16, wherein the GPU and memory system are configured to operate at a predetermined speed and operating state prior to entering the power-down state. 22. The computer-readable storage medium of claim 16, further comprising causing at least one enable signal used for controlling transmission of data between the GPU and the memory system to be clamped to a fixed voltage prior to entering the power-down state. 23. The computer-readable storage medium of claim 16, further comprising:
entering a power-on state; determining that the power-on state is associated with the sleep mode; causing the at least one memory device included in the memory system to exit the self-refresh mode; reloading the internal processing state for the GPU from the memory system to the GPU. 24. The computer-readable storage medium of claim 23, further comprising performing a power-on reset of the GPU and detecting a warm-boot state for transitioning the GPU from the power-down state to the power-on state. 25. The computer-readable storage medium of claim 23, wherein the at least one memory device comprises a DRAM device, and the step of reloading comprises copying the internal processing state for the GPU from the DRAM device to the GPU. 26. The computer-readable storage medium of claim 25, wherein the at least one memory device comprises a non-volatile memory device, and the step of reloading comprises copying the internal processing state for the GPU and a DRAM interface state from the non-volatile memory device to the GPU. 27. The computer-readable storage medium of claim 26, wherein the non-volatile memory device comprises at least one flash memory device, coupled to the GPU via an interface that is independent of the DRAM device. 28. The computer-readable storage medium of claim 23, wherein the GPU and the memory system are configured to operate at a predetermined speed and operating state prior to entering the power-on state. 29. The computer-readable storage medium of claim 23, further comprising causing at least one enable signal used for controlling transmission of data between the GPU and the memory system to be clamped to a fixed voltage prior to exiting the power-on state. 30. The computer-readable storage medium of claim 26, wherein a portion of the internal processing state for the GPU and the DRAM interface state are restored to a previously configured state based on data stored within the non-volatile memory. 31. A computing device, comprising:
a memory system configured to operate in an active mode and a low power self-refresh mode; an isolation circuit coupled to the memory system and configured to shunt at least one enable signal associated with the memory system; a processing unit coupled to the memory system and to the isolation circuit and configured to:
save internal processing state for the processing unit to the memory system;
cause at least one memory device included in the memory system to enter a self-refresh mode;
enter a power-down state;
enter a power-on state;
determine that the power-on state is associated with a sleep mode;
cause the at least one memory device included in the memory system to exit the self-refresh mode; and
load the internal processing state for the processing unit from the memory system to the processing unit,
wherein the at least one enable signal is shunted to a fixed voltage prior to entering the power-down state, and
wherein the at least one enable signal is shunted to a fixed voltage prior to entering the power-on state. | A technique is disclosed for a graphics processing unit (GPU) to enter and exit a power saving deep sleep mode. The technique involves preserving processing state within local memory by configuring the local memory to operate in a self-refresh mode while the GPU is powered off for deep sleep. An interface circuit coupled to the local memory is configured to prevent spurious GPU signals from disrupting proper self-refresh of the local memory. Spurious GPU signals may result from GPU power down and GPU power up events associated with the GPU entering and exiting the deep sleep mode.1. A method implemented by a graphics processing unit (GPU) for entering and exiting sleep mode, the method comprising:
receiving a command to enter a sleep mode; saving internal processing state for the GPU to a memory system local to the GPU; causing at least one memory device included in the memory system to enter a self-refresh mode; and entering a power-down state. 2. The method of claim 1, further comprising stalling all incoming workloads and completing all in-progress processing in order to halt processing within the GPU. 3. The method of claim 1, wherein the at least one memory device comprises a dynamic random access memory (DRAM) device, and the step of saving comprises copying the internal processing state for the GPU to the DRAM device. 4. The method of claim 1, wherein the at least one memory device comprises a non-volatile memory, and the step of saving comprises copying the internal processing state for the GPU and a memory interface state to the non-volatile memory. 5. The method of claim 4, wherein the non-volatile memory comprises at least one flash memory device coupled to the GPU via an interface that is independent of an interface to a different memory device within the memory system. 6. The method of claim 1, wherein the GPU and memory system are configured to operate at a predetermined speed and operating state prior to entering the power-down state. 7. The method of claim 1, further comprising causing at least one enable signal used for controlling transmission of data between the GPU and the memory system to be clamped to a fixed voltage prior to entering the power-down state. 8. The method of claim 1, further comprising:
entering a power-on state; determining that the power-on state is associated with the sleep mode; causing the at least one memory device included in the memory system to exit the self-refresh mode; reloading the internal processing state for the GPU from the memory system to the GPU. 9. The method of claim 8, further comprising performing a power-on reset of the GPU and detecting a warm-boot state for transitioning the GPU from the power-down state to the power-on state. 10. The method of claim 8, wherein the at least one memory device comprises a DRAM device, and the step of reloading comprises copying the internal processing state for the GPU from the DRAM device to the GPU. 11. The method of claim 10, wherein the at least one memory device comprises a non-volatile memory device, and the step of reloading comprises copying the internal processing state for the GPU and a DRAM interface state from the non-volatile memory device to the GPU. 12. The method of claim 11, wherein the non-volatile memory device comprises at least one flash memory device, coupled to the GPU via an interface that is independent of the DRAM device. 13. The method of claim 8, wherein the GPU and the memory system are configured to operate at a predetermined speed and operating state prior to entering the power-on state. 14. The method of claim 8, further comprising causing at least one enable signal used for controlling transmission of data between the GPU and the memory system to be clamped to a fixed voltage prior to exiting the power-on state. 15. The method of claim 11, wherein a portion of the internal processing state for the GPU and the DRAM interface state are restored to a previously configured state based on data stored within the non-volatile memory. 16. A computer-readable storage medium including instructions that, when executed by a graphics processing unit (GPU), cause the GPU to enter and exit a sleep mode, by performing the steps of:
receiving a command to enter a sleep mode; saving internal processing state for the GPU to a memory system local to the GPU; causing at least one memory device included in the memory system to enter a self-refresh mode; and entering a power-down state. 17. The computer-readable storage medium of claim 16, further comprising stalling all incoming workloads and completing all in-progress processing in order to halt processing within the GPU. 18. The computer-readable storage medium of claim 16, wherein the at least one memory device comprises a dynamic random access memory (DRAM) device, and the step of saving comprises copying the internal processing state for the GPU to the DRAM device. 19. The computer-readable storage medium of claim 16, wherein the at least one memory device comprises a non-volatile memory, and the step of saving comprises copying the internal processing state for the GPU and a memory interface state to the non-volatile memory. 20. The computer-readable storage medium of claim 19, wherein the non-volatile memory comprises at least one flash memory device coupled to the GPU via an interface that is independent of an interface to a different memory device within the memory system. 21. The computer-readable storage medium of claim 16, wherein the GPU and memory system are configured to operate at a predetermined speed and operating state prior to entering the power-down state. 22. The computer-readable storage medium of claim 16, further comprising causing at least one enable signal used for controlling transmission of data between the GPU and the memory system to be clamped to a fixed voltage prior to entering the power-down state. 23. The computer-readable storage medium of claim 16, further comprising:
entering a power-on state; determining that the power-on state is associated with the sleep mode; causing the at least one memory device included in the memory system to exit the self-refresh mode; reloading the internal processing state for the GPU from the memory system to the GPU. 24. The computer-readable storage medium of claim 23, further comprising performing a power-on reset of the GPU and detecting a warm-boot state for transitioning the GPU from the power-down state to the power-on state. 25. The computer-readable storage medium of claim 23, wherein the at least one memory device comprises a DRAM device, and the step of reloading comprises copying the internal processing state for the GPU from the DRAM device to the GPU. 26. The computer-readable storage medium of claim 25, wherein the at least one memory device comprises a non-volatile memory device, and the step of reloading comprises copying the internal processing state for the GPU and a DRAM interface state from the non-volatile memory device to the GPU. 27. The computer-readable storage medium of claim 26, wherein the non-volatile memory device comprises at least one flash memory device, coupled to the GPU via an interface that is independent of the DRAM device. 28. The computer-readable storage medium of claim 23, wherein the GPU and the memory system are configured to operate at a predetermined speed and operating state prior to entering the power-on state. 29. The computer-readable storage medium of claim 23, further comprising causing at least one enable signal used for controlling transmission of data between the GPU and the memory system to be clamped to a fixed voltage prior to exiting the power-on state. 30. The computer-readable storage medium of claim 26, wherein a portion of the internal processing state for the GPU and the DRAM interface state are restored to a previously configured state based on data stored within the non-volatile memory. 31. A computing device, comprising:
a memory system configured to operate in an active mode and a low power self-refresh mode; an isolation circuit coupled to the memory system and configured to shunt at least one enable signal associated with the memory system; a processing unit coupled to the memory system and to the isolation circuit and configured to:
save internal processing state for the processing unit to the memory system;
cause at least one memory device included in the memory system to enter a self-refresh mode;
enter a power-down state;
enter a power-on state;
determine that the power-on state is associated with a sleep mode;
cause the at least one memory device included in the memory system to exit the self-refresh mode; and
load the internal processing state for the processing unit from the memory system to the processing unit,
wherein the at least one enable signal is shunted to a fixed voltage prior to entering the power-down state, and
wherein the at least one enable signal is shunted to a fixed voltage prior to entering the power-on state. | 2,600 |
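The enter/exit sequence recited in claims 1 and 8 of the record above (save internal state to local memory, put the memory into self-refresh, power down; on a warm boot, exit self-refresh and reload the state) can be modelled as a toy Python state machine. This is only an illustrative sketch of the claimed sequence; the class names, attributes, and the warm-boot flag are assumptions, not taken from the patent:

```python
import copy

class LocalMemory:
    """Models GPU-local DRAM that retains contents in self-refresh while the GPU is off."""
    def __init__(self):
        self.contents = None
        self.self_refresh = False

class Gpu:
    def __init__(self, memory):
        self.memory = memory
        self.state = {"regs": [0, 0, 0, 0]}  # stand-in for internal processing state
        self.powered = True
        self.warm_boot = False

    def enter_sleep(self):
        # Claim 1: save state to local memory, enter self-refresh, power down.
        self.memory.contents = copy.deepcopy(self.state)
        self.memory.self_refresh = True
        self.powered = False
        self.state = None       # internal state is lost at power-down
        self.warm_boot = True   # marker checked on the next power-on (claim 9)

    def power_on(self):
        # Claim 8: on a warm boot, exit self-refresh and reload the saved state.
        self.powered = True
        if self.warm_boot:
            self.memory.self_refresh = False
            self.state = copy.deepcopy(self.memory.contents)
            self.warm_boot = False
```

Note that the claimed isolation circuit (clamping enable signals to a fixed voltage around the power transitions) has no analogue in this software model; it exists precisely because a real power-down produces spurious electrical signals that code cannot represent.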
9,707 | 9,707 | 13,379,668 | 2,669 | The invention relates to a medical imaging apparatus for providing information about an object. The invention further relates to a method for providing information about an object. In order to improve the preparation and the visual reception of information relating to an object during medical interventions, a medical imaging apparatus for providing information about an object is provided that comprises an image acquisition device an interventional device, a processing device and a display device. The image acquisition device detects object data from at least one region of interest of an object and provides the object data to the processing device. The interventional device detects physiological parameters of the object depending on the position of a predetermined area of the interventional device in relation to the object and provides the physiological parameters to the processing device. Further, the processing device transforms at least a part of the object data into image data and converts the physiological parameters into physiological data. The processing device then modifies at least one image parameter of the image data depending on the physiological data and thereby transforms the image data into modified live image data which is provided to the display device displaying a modified live image. | 1. A medical imaging apparatus (10) for providing information about an object, comprising:
an image acquisition device (11); an interventional device (20); a processing device (26); and a display device (28); wherein the image acquisition device (11) is adapted to detect object data (114) from at least one region of interest of an object (18) and to provide the object data to the processing device (26); wherein the interventional device (20) is adapted to detect physiological parameters (118) of the object depending on the position of a predetermined area (32) of the interventional device in relation to the object and to provide the physiological parameters to the processing device (26); wherein the processing device (26) is adapted to transform at least a part of the object data (114) into image data (122) and to convert the physiological parameters (118) into physiological data (126); wherein the processing device (26) is adapted to modify at least one image parameter of the image data (122) depending on the physiological data (118) and to thereby transform the image data into modified live image data (130); wherein the processing device (26) is adapted to provide the modified live image data to the display device (28); and wherein the display device (28) is adapted to display a modified live image (134). 2. Apparatus according to claim 1, wherein the image data (122) comprises a background (312; 612) and wherein at least one parameter of at least a part of the background is modified in dependency of the physiological data (126). 3. Apparatus according to claim 1, wherein the interventional device (20) is shown in the modified live image (134) and wherein a graphical representation (412 a, 412 b, 412 c) of the physiological data is shown along a trajectory (416) of the interventional device (20). 4. Apparatus according to claim 1, wherein the interventional device (20) is shown in the modified live image (134) and wherein the color (512) of the interventional device (20) is modified depending on the detected physiological data. 5. 
Apparatus according to claim 1, wherein the interventional device (20) is an optical needle with at least one integrated optical fibre adapted to acquire an optical spectrum to be translated into the physiological parameters. 6. Apparatus according to claim 1, wherein the interventional device (20) is adapted to detect physiological parameters of the object (18) depending on the position of at least two different predetermined areas of the interventional device in relation to the object (18). 7. Apparatus according to claim 1, wherein the contribution of blood, fat and water in the tissue at the predetermined location of the interventional device (20) is converted into a color code; and wherein the color code is used for the modification of the image data (122). 8. Apparatus according to claim 1, wherein the processing device (26) is adapted to analyze the physiological data such that the approach of an important structure is determined and the image data (122) is modified such that graphical instructional information (712 a; 712 b; 712 c) is provided to the user. 9. Apparatus according to claim 1, wherein the interventional device (20) is adapted to acquire microscopic image data of the tissue in front of the needle and to detect the physiological data from the microscopic image data; and wherein the image data is modified with at least a part of the microscopic image data. 10. A method for providing information about an object, comprising the following steps:
detecting (112) first object data (114) from at least one region of interest of an object; receiving (116) object parameters (118) of the object from a device depending on the position of a predetermined location of the device in relation to the object; transforming (120) at least a part of the object data (114) into image data (122); converting (124) the object parameters (118) into second object data (126); modifying (128) at least one image parameter of the image data (122) depending on the second object data (126) and thereby transforming the image data into modified live image data (130); and displaying (132) a modified live image (134). 11. Computer program element for controlling an apparatus according to claim 1, which, when being executed by a processing unit, is adapted to perform the method steps of claim 10. 12. Computer readable medium having stored the program element of claim 11. | The invention relates to a medical imaging apparatus for providing information about an object. The invention further relates to a method for providing information about an object. In order to improve the preparation and the visual reception of information relating to an object during medical interventions, a medical imaging apparatus for providing information about an object is provided that comprises an image acquisition device an interventional device, a processing device and a display device. The image acquisition device detects object data from at least one region of interest of an object and provides the object data to the processing device. The interventional device detects physiological parameters of the object depending on the position of a predetermined area of the interventional device in relation to the object and provides the physiological parameters to the processing device. Further, the processing device transforms at least a part of the object data into image data and converts the physiological parameters into physiological data. 
The processing device then modifies at least one image parameter of the image data depending on the physiological data and thereby transforms the image data into modified live image data which is provided to the display device displaying a modified live image.1. A medical imaging apparatus (10) for providing information about an object, comprising:
an image acquisition device (11); an interventional device (20); a processing device (26); and a display device (28); wherein the image acquisition device (11) is adapted to detect object data (114) from at least one region of interest of an object (18) and to provide the object data to the processing device (26); wherein the interventional device (20) is adapted to detect physiological parameters (118) of the object depending on the position of a predetermined area (32) of the interventional device in relation to the object and to provide the physiological parameters to the processing device (26); wherein the processing device (26) is adapted to transform at least a part of the object data (114) into image data (122) and to convert the physiological parameters (118) into physiological data (126); wherein the processing device (26) is adapted to modify at least one image parameter of the image data (122) depending on the physiological data (118) and to thereby transform the image data into modified live image data (130); wherein the processing device (26) is adapted to provide the modified live image data to the display device (28); and wherein the display device (28) is adapted to display a modified live image (134). 2. Apparatus according to claim 1, wherein the image data (122) comprises a background (312; 612) and wherein at least one parameter of at least a part of the background is modified in dependency of the physiological data (126). 3. Apparatus according to claim 1, wherein the interventional device (20) is shown in the modified live image (134) and wherein a graphical representation (412 a, 412 b, 412 c) of the physiological data is shown along a trajectory (416) of the interventional device (20). 4. Apparatus according to claim 1, wherein the interventional device (20) is shown in the modified live image (134) and wherein the color (512) of the interventional device (20) is modified depending on the detected physiological data. 5. 
Apparatus according to claim 1, wherein the interventional device (20) is an optical needle with at least one integrated optical fibre adapted to acquire an optical spectrum to be translated into the physiological parameters. 6. Apparatus according to claim 1, wherein the interventional device (20) is adapted to detect physiological parameters of the object (18) depending on the position of at least two different predetermined areas of the interventional device in relation to the object (18). 7. Apparatus according to claim 1, wherein the contribution of blood, fat and water in the tissue at the predetermined location of the interventional device (20) is converted into a color code; and wherein the color code is used for the modification of the image data (122). 8. Apparatus according to claim 1, wherein the processing device (26) is adapted to analyze the physiological data such that the approach of an important structure is determined and the image data (122) is modified such that graphical instructional information (712 a; 712 b; 712 c) is provided to the user. 9. Apparatus according to claim 1, wherein the interventional device (20) is adapted to acquire microscopic image data of the tissue in front of the needle and to detect the physiological data from the microscopic image data; and wherein the image data is modified with at least a part of the microscopic image data. 10. A method for providing information about an object, comprising the following steps:
detecting (112) first object data (114) from at least one region of interest of an object; receiving (116) object parameters (118) of the object from a device depending on the position of a predetermined location of the device in relation to the object; transforming (120) at least a part of the object data (114) into image data (122); converting (124) the object parameters (118) into second object data (126); modifying (128) at least one image parameter of the image data (122) depending on the second object data (126) and thereby transforming the image data into modified live image data (130); and displaying (132) a modified live image (134). 11. Computer program element for controlling an apparatus according to claim 1, which, when being executed by a processing unit, is adapted to perform the method steps of claim 10. 12. Computer readable medium having stored the program element of claim 11. | 2,600 |
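Claim 7 of this record describes converting the blood, fat, and water contribution at the device tip into a color code that then modifies the image. A minimal sketch of that idea follows; the particular fraction-to-RGB mapping and the function names are assumptions for demonstration, since the patent does not specify a mapping:

```python
def composition_to_color(blood, fat, water):
    """Map tissue fractions at the device tip onto an 8-bit RGB color code."""
    total = blood + fat + water
    if total == 0:
        return (0, 0, 0)
    return (round(255 * blood / total),
            round(255 * fat / total),
            round(255 * water / total))

def recolor_device(image, device_mask, color):
    """Return a copy of `image` (list of rows of RGB tuples) with the pixels
    flagged in `device_mask` replaced by `color`, as in claim 4's modification
    of the interventional device's color in the live image."""
    return [[color if device_mask[y][x] else image[y][x]
             for x in range(len(image[0]))]
            for y in range(len(image))]
```

In the apparatus, the recolored frame would then be handed to the display device as the "modified live image data".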
9,708 | 9,708 | 15,298,351 | 2,628 | An information processing apparatus includes an imaging unit, an icon display control unit causing a display to display an operation icon, a pickup image display processing unit causing the display to sequentially display an input operation region image constituted by, among pixel regions constituting an image picked up by the imaging unit, a pixel region including at least a portion of a hand of a user, an icon management unit managing event issue definition information, which is a condition for determining that the operation icon has been operated by the user, for each operation icon, an operation determination unit determining whether the user has operated the operation icon based on the input operation region image displayed in the display and the event issue definition information, and a processing execution unit performing predetermined processing corresponding to the operation icon in accordance with a determination result by the operation determination unit. | 1-16. (canceled) 17. An information processing apparatus, comprising:
an imaging unit; a sound input unit configured to receive an audio signal; a sound recognition processing unit configured to determine whether at least one characteristic of the received audio signal corresponds to an interface, the interface identifying at least one element of content viewable by a user; a detecting unit configured to detect a first movement of a human appendage within successive images obtained by the imaging unit; an operation determination unit configured to determine whether the detected first movement corresponds to a first predetermined movement associated with the interface, wherein the operation determination unit performs at least:
identifying, within pixel regions constituting a current image of the successive images, (i) a first pixel region that includes at least a portion of the human appendage, and (ii) an icon pixel region that includes an icon enabling the user to access the interface;
calculating a first center-of-gravity for the first pixel region; and
determining whether the first center-of-gravity falls within the icon pixel region; and
a processing unit configured to generate one or more instructions to:
display the interface to the user when the at least one characteristic of the received audio signal corresponds to the interface; and
after the interface is displayed, perform an operation on the at least one element of content identified by the interface when the detected first movement corresponds to the first predetermined movement. 18. The information processing apparatus of claim 17, wherein:
the successive images obtained by the imaging unit comprise the current image and a plurality of prior successive images; and the operation determination unit is further configured to:
when the first center-of-gravity is determined to fall within the icon pixel region, identify second pixel regions, within pixels regions of a subset of the prior successive pixel regions, that include at least the portion of the human appendage;
calculate second centers-of-gravity for the second pixel regions. 19. The information processing apparatus of claim 18, wherein the operation determination unit is further configured to:
compute a motion vector representative of the detected movement based on the calculated first and second centers-of-gravity; and determine whether the detected first movement corresponds to the first predetermined movement based on the computed motion vector. 20. The information processing apparatus of claim 19, wherein the operation determination unit is further configured to:
determine a trajectory of the detected first motion based on the calculated first and second centers-of-gravity, the determined trajectory being representative of the detected first movement of the human appendage; and compute the motion vector based on the determined trajectory. 21. The information processing apparatus of claim 17, further comprising a display unit, the display unit being configured to:
receive the generated instructions from the processing unit; and in response to the received instructions, display the interface identifying the at least one element of viewable content. 22. The information processing apparatus of claim 21, wherein the first predetermined movement corresponds to a downward motion of the human appendage along a vertical axis of the display unit, the predetermined movement occurring within a current one of the successive images and a plurality of prior ones of the successive images. 23. The information processing apparatus of claim 21, wherein:
the interface corresponds to a program guide identifying the at least one element of viewable content; and in response to the received instructions, the display unit is further configured to display a visual representation of at least a portion of the program guide to the user, the visual representation comprising a two-dimensional grid of viewable content arranged in accordance with viewing times and channels. 24. The information processing apparatus of claim 21, wherein the display unit is further configured to display, in response to the received instructions, at least one navigation portion being disposed along at least one of a vertical or horizontal edge of the display unit. 25. The information processing apparatus of claim 24, wherein:
the detecting unit is configured to detect a second movement of the human appendage within additional images obtained by the imaging unit; the operation determination unit is configured to determine whether the detected second movement corresponds to a second predetermined movement associated with the navigation portion; and the processing unit is configured to generate an instruction to modify at least a portion of the displayed interface when the detected second movement corresponds to the second predetermined movement. 26. The information processing apparatus of claim 24, wherein the at least one navigation portion corresponds to at least one of (i) a horizontal scroll bar disposed along the horizontal edge of the display unit or (ii) a vertical scroll bar disposed along the vertical edge of the display unit. 27. The information processing apparatus of claim 17, wherein the operation determination unit is further configured to obtain event definition information associated with the interface, the event definition information identifying the first predetermined movement. 28. The information processing apparatus of claim 17, wherein the human appendage comprises at least one of a human finger or a human hand. 29. The information processing apparatus of claim 17, wherein the detecting unit is configured to detect the first movement of the human appendage based on a flesh color associated with the human appendage. 30. The information processing apparatus of claim 29, wherein the detection unit is further configured to:
compute a metric indicative of a presence of the flesh color within the first region of a current one of the successive images; determine whether a value of the metric exceeds a threshold value; and identify the portion of the user appendage within the first region, when the metric value exceeds the threshold value. 31. The information processing apparatus of claim 30, wherein the metric represents a number of pixels within the first region associated with at least one of (i) a hue that corresponds to a predetermined value or (ii) a hue that falls within a range of predetermined values. 32. The information processing apparatus of claim 30, wherein the detection unit is further configured to:
identify the portion of the user appendage within one or more prior ones of the successive images; obtain a temporal period during which the imaging unit obtained a current one of the successive images and the one or more prior images; determine whether the temporal period exceeds a threshold time period; and detect the movement of the user appendage when the temporal period exceeds the threshold time period. 33. A computer-implemented method, comprising:
detecting a first movement of a human appendage within successive images obtained by an imaging unit of an information processing apparatus; obtaining an audio signal received by a sound input unit of the information processing apparatus; determining whether at least one characteristic of the received sound corresponds to an interface, the interface identifying at least one element of content viewable by a user; determining whether the detected first movement corresponds to a first predetermined movement associated with the interface, comprising:
identifying, within pixel regions constituting a current image of the successive images, (i) a first pixel region that includes at least a portion of the human appendage, and (ii) an icon pixel region that includes an icon enabling the user to access the interface;
calculating a first center-of-gravity for the first pixel region; and
determining whether the first center-of-gravity falls within the icon pixel region; and
generating one or more instructions to:
display the interface to the user when the at least one characteristic of the received audio signal corresponds to the interface; and
after the interface is displayed, perform an operation on the at least one element of content identified by the interface when the detected first movement corresponds to the first predetermined movement. 34. A tangible, non-transitory computer-readable medium storing instructions that, when executed by at least one processor, perform a method, the method comprising the steps of:
detecting a first movement of a human appendage within successive images obtained by an imaging unit of an information processing apparatus; obtaining an audio signal received by a sound input unit of the information processing apparatus; determining whether at least one characteristic of the received sound corresponds to an interface, the interface identifying at least one element of content viewable by a user; determining whether the detected first movement corresponds to a first predetermined movement associated with the interface, comprising:
identifying, within pixel regions constituting a current image of the successive images, (i) a first pixel region that includes at least a portion of the human appendage, and (ii) an icon pixel region that includes an icon enabling the user to access the interface;
calculating a first center-of-gravity for the first pixel region; and
determining whether the first center-of-gravity falls within the icon pixel region; and
generating one or more instructions to:
display the interface to the user when the at least one characteristic of the received audio signal corresponds to the interface; and
after the interface is displayed, perform an operation on the at least one element of content identified by the interface when the detected first movement corresponds to the first predetermined movement. | An information processing apparatus includes an imaging unit, an icon display control unit causing a display to display an operation icon, a pickup image display processing unit causing the display to sequentially display an input operation region image constituted by, among pixel regions constituting an image picked up by the imaging unit, a pixel region including at least a portion of a hand of a user, an icon management unit managing event issue definition information, which is a condition for determining that the operation icon has been operated by the user, for each operation icon, an operation determination unit determining whether the user has operated the operation icon based on the input operation region image displayed in the display and the event issue definition information, and a processing execution unit performing predetermined processing corresponding to the operation icon in accordance with a determination result by the operation determination unit.1-16. (canceled) 17. An information processing apparatus, comprising:
an imaging unit; a sound input unit configured to receive an audio signal; a sound recognition processing unit configured to determine whether at least one characteristic of the received audio signal corresponds to an interface, the interface identifying at least one element of content viewable by a user; a detecting unit configured to detect a first movement of a human appendage within successive images obtained by the imaging unit; an operation determination unit configured to determine whether the detected first movement corresponds to a first predetermined movement associated with the interface, wherein the operation determination unit performs at least:
identifying, within pixel regions constituting a current image of the successive images, (i) a first pixel region that includes at least a portion of the human appendage, and (ii) an icon pixel region that includes an icon enabling the user to access the interface;
calculating a first center-of-gravity for the first pixel region; and
determining whether the first center-of-gravity falls within the icon pixel region; and
a processing unit configured to generate one or more instructions to:
display the interface to the user when the at least one characteristic of the received audio signal corresponds to the interface; and
after the interface is displayed, perform an operation on the at least one element of content identified by the interface when the detected first movement corresponds to the first predetermined movement. 18. The information processing apparatus of claim 17, wherein:
the successive images obtained by the imaging unit comprise the current image and a plurality of prior successive images; and the operation determination unit is further configured to:
when the first center-of-gravity is determined to fall within the icon pixel region, identify second pixel regions, within pixel regions of a subset of the prior successive images, that include at least the portion of the human appendage;
calculate second centers-of-gravity for the second pixel regions. 19. The information processing apparatus of claim 18, wherein the operation determination unit is further configured to:
compute a motion vector representative of the detected movement based on the calculated first and second centers-of-gravity; and determine whether the detected first movement corresponds to the first predetermined movement based on the computed motion vector. 20. The information processing apparatus of claim 19, wherein the operation determination unit is further configured to:
determine a trajectory of the detected first motion based on the calculated first and second centers-of-gravity, the determined trajectory being representative of the detected first movement of the human appendage; and compute the motion vector based on the determined trajectory. 21. The information processing apparatus of claim 17, further comprising a display unit, the display unit being configured to:
receive the generated instructions from the processing unit; and in response to the received instructions, display the interface identifying the at least one element of viewable content. 22. The information processing apparatus of claim 21, wherein the first predetermined movement corresponds to a downward motion of the human appendage along a vertical axis of the display unit, the predetermined movement occurring within a current one of the successive images and a plurality of prior ones of the successive images. 23. The information processing apparatus of claim 21, wherein:
the interface corresponds to a program guide identifying the at least one element of viewable content; and in response to the received instructions, the display unit is further configured to display a visual representation of at least a portion of the program guide to the user, the visual representation comprising a two-dimensional grid of viewable content arranged in accordance with viewing times and channels. 24. The information processing apparatus of claim 21, wherein the display unit is further configured to display, in response to the received instructions, at least one navigation portion being disposed along at least one of a vertical or horizontal edge of the display unit. 25. The information processing apparatus of claim 24, wherein:
the detecting unit is configured to detect a second movement of the human appendage within additional images obtained by the imaging unit; the operation determination unit is configured to determine whether the detected second movement corresponds to a second predetermined movement associated with the navigation portion; and the processing unit is configured to generate an instruction to modify at least a portion of the displayed interface when the detected second movement corresponds to the second predetermined movement. 26. The information processing apparatus of claim 24, wherein the at least one navigation portion corresponds to at least one of (i) a horizontal scroll bar disposed along the horizontal edge of the display unit or (ii) a vertical scroll bar disposed along the vertical edge of the display unit. 27. The information processing apparatus of claim 17, wherein the operation determination unit is further configured to obtain event definition information associated with the interface, the event definition information identifying the first predetermined movement. 28. The information processing apparatus of claim 17, wherein the human appendage comprises at least one of a human finger or a human hand. 29. The information processing apparatus of claim 17, wherein the detecting unit is configured to detect the first movement of the human appendage based on a flesh color associated with the human appendage. 30. The information processing apparatus of claim 29, wherein the detection unit is further configured to:
compute a metric indicative of a presence of the flesh color within the first region of a current one of the successive images; determine whether a value of the metric exceeds a threshold value; and identify the portion of the user appendage within the first region, when the metric value exceeds the threshold value. 31. The information processing apparatus of claim 30, wherein the metric represents a number of pixels within the first region associated with at least one of (i) a hue that corresponds to a predetermined value or (ii) a hue that falls within a range of predetermined values. 32. The information processing apparatus of claim 30, wherein the detection unit is further configured to:
identify the portion of the user appendage within one or more prior ones of the successive images; obtain a temporal period during which the imaging unit obtained a current one of the successive images and the one or more prior images; determine whether the temporal period exceeds a threshold time period; and detect the movement of the user appendage when the temporal period exceeds the threshold time period. 33. A computer-implemented method, comprising:
detecting a first movement of a human appendage within successive images obtained by an imaging unit of an information processing apparatus; obtaining an audio signal received by a sound input unit of the information processing apparatus; determining whether at least one characteristic of the received sound corresponds to an interface, the interface identifying at least one element of content viewable by a user; determining whether the detected first movement corresponds to a first predetermined movement associated with the interface, comprising:
identifying, within pixel regions constituting a current image of the successive images, (i) a first pixel region that includes at least a portion of the human appendage, and (ii) an icon pixel region that includes an icon enabling the user to access the interface;
calculating a first center-of-gravity for the first pixel region; and
determining whether the first center-of-gravity falls within the icon pixel region; and
generating one or more instructions to:
display the interface to the user when the at least one characteristic of the received audio signal corresponds to the interface; and
after the interface is displayed, perform an operation on the at least one element of content identified by the interface when the detected first movement corresponds to the first predetermined movement. 34. A tangible, non-transitory computer-readable medium storing instructions that, when executed by at least one processor, perform a method, the method comprising the steps of:
detecting a first movement of a human appendage within successive images obtained by an imaging unit of an information processing apparatus; obtaining an audio signal received by a sound input unit of the information processing apparatus; determining whether at least one characteristic of the received sound corresponds to an interface, the interface identifying at least one element of content viewable by a user; determining whether the detected first movement corresponds to a first predetermined movement associated with the interface, comprising:
identifying, within pixel regions constituting a current image of the successive images, (i) a first pixel region that includes at least a portion of the human appendage, and (ii) an icon pixel region that includes an icon enabling the user to access the interface;
calculating a first center-of-gravity for the first pixel region; and
determining whether the first center-of-gravity falls within the icon pixel region; and
generating one or more instructions to:
display the interface to the user when the at least one characteristic of the received audio signal corresponds to the interface; and
after the interface is displayed, perform an operation on the at least one element of content identified by the interface when the detected first movement corresponds to the first predetermined movement. | 2,600 |
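The claims of application 15,298,351 above outline a concrete pipeline: mask flesh-colored pixels (claims 29-31), compute a center-of-gravity for the resulting pixel region, test whether it falls within the icon pixel region (claim 17), and derive a motion vector from successive centers-of-gravity (claims 18-19). A minimal NumPy sketch of that pipeline follows; the function names, hue thresholds, and box representation are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def flesh_mask(image_hsv, hue_lo=0.02, hue_hi=0.10):
    """Rough flesh-color mask: hue within a predetermined range (cf. claim 31).
    The hue bounds are hypothetical placeholders."""
    hue = image_hsv[..., 0]
    return (hue >= hue_lo) & (hue <= hue_hi)

def center_of_gravity(mask):
    """Mean (row, col) of the pixels in a binary region; None if the region is empty."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (ys.mean(), xs.mean())

def icon_hit(cog, icon_box):
    """True when the center-of-gravity falls within the icon pixel region.
    icon_box is (y0, x0, y1, x1), inclusive."""
    if cog is None:
        return False
    y, x = cog
    y0, x0, y1, x1 = icon_box
    return y0 <= y <= y1 and x0 <= x <= x1

def motion_vector(cogs):
    """Net displacement across successive centers-of-gravity (cf. claim 19):
    last minus first of the non-empty detections."""
    pts = [c for c in cogs if c is not None]
    if len(pts) < 2:
        return (0.0, 0.0)
    (y0, x0), (y1, x1) = pts[0], pts[-1]
    return (y1 - y0, x1 - x0)
```

A downward swipe (claim 22) would then show as a motion vector with a large positive row component relative to its column component; the patent's own trajectory computation (claim 20) is not reproduced here.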
9,709 | 9,709 | 15,059,067 | 2,616 | A digital medium environment is described to generate a three dimensional facial expression from a blend shape and a facial expression source. A semantic type is detected that defines a facial expression of the blend shape. Transfer intensities are assigned based on the detected semantic type to the blend shape and the facial expression source, respectively, for individual portions of the three dimensional facial expression, the transfer intensities specifying weights given to the blend shape and the facial expression source, respectively, for the individual portions of the three dimensional facial expression. The three dimensional facial expression is generated from the blend shape and the facial expression source based on the assigned transfer intensities. | 1. In a digital medium environment to generate a three dimensional facial expression from a blend shape and a facial expression source, a method implemented by a computing device, the method comprising:
detecting, by the computing device, a semantic type defining a facial expression of the blend shape; assigning, by the computing device, transfer intensities based on the detected semantic type to the blend shape and the facial expression source, respectively, for individual portions of the three dimensional facial expression, the transfer intensities specifying weights given to the blend shape and the facial expression source, respectively, for the individual portions of the three dimensional facial expression; and generating, by the computing device, the three dimensional facial expression from the blend shape and the facial expression source based on the assigned transfer intensities. 2. The method as described in claim 1, wherein the detecting of the semantic type includes detecting a change of at least one facial part of the blend shape in comparison to the facial expression source. 3. The method as described in claim 1, further comprising creating, by the computing device, the facial expression source as a model in three-dimensions from a two-dimensional image by using a template and wherein the detecting of the semantic type is performed by comparing the model to the blend shape. 4. The method as described in claim 3, wherein the generating uses the two-dimensional image as a source of texture for the three dimensional facial expression. 5. The method as described in claim 1, further comprising repeating the detecting, the assigning, and the generating for a plurality of said blend shapes using the facial expression source. 6. The method as described in claim 1, wherein the assigning of the transfer intensities is based at least in part on an amount of movement detected for respective said portions from the facial expression source to the blend shape. 7. 
The method as described in claim 6, wherein the assigning includes assigning increasingly greater amounts of said weight to the blend shape than the facial expression source to a respective said portion in response to increasingly greater detected amounts of the movement for the respective points. 8. The method as described in claim 1, further comprising forming, by the computing device, an animation including the facial expression source as a frame and the three dimensional facial expression as another frame. 9. The method as described in claim 8, wherein the forming includes at least one other frame disposed in a sequence in the animation between the frame and the other frame, the at least one other frame formed by blending the facial expression source and the three dimensional facial expression. 10. In a digital medium environment to generate a three dimensional facial expression from a blend shape and a facial expression source, a method implemented by a computing device, the method comprising:
detecting, by the computing device, a semantic type defining a facial expression of the blend shape; defining, by the computing device, points on the facial expression source as corresponding to points on the blend shape based on the detected semantic type of the blend shape; assigning, by the computing device, transfer intensities to the points of the blend shape and the facial expression source, respectively, to form respective points of a plurality of points of the three dimensional facial expression, the assigning being nonlinear for the plurality of points, one to another, of the three dimensional facial expression based on the detected semantic type; and generating, by the computing device, the three dimensional facial expression from the blend shape and the facial expression source by forming the plurality of points using the assigned non-linear transfer intensities for respective said points of the blend shape and the facial expression source. 11. The method as described in claim 10, wherein the transfer intensities specify weights given to the blend shape and the facial expression source, respectively, for generating individual points of the plurality of points of the three dimensional facial expression. 12. The method as described in claim 10, wherein the assigning of the transfer intensities non-linearly includes:
computing, by the computing device, an initial said transfer intensity based on spatial differences between the blend shape and a model of the facial expression source; and augmenting, by the computing device, the initial said transfer intensity non-linearly based on comparison of the points on the facial expression source to respective said points of the blend shape. 13. The method as described in claim 10, wherein the generating includes computing, by the computing device, a smoothness factor based on vertices of the detected semantic type of the blend shape, the vertices defined at least in part using the points of the blend shape. 14. The method as described in claim 10, wherein the assigning of the transfer intensities includes optimizing the plurality of points of the three dimensional facial expression by minimizing a difference between neighboring vertices in a mesh formed using the points of the facial expression source and maximizing a closeness to an amount of movement defined through comparison of corresponding said points of the blend shape and the facial expression source. 15. In a digital medium environment to generate a three dimensional facial expression from a blend shape and a facial expression source, a system comprising:
a semantic type detection module implemented at least partially in hardware to detect a semantic type defining a facial expression of the blend shape; a transfer intensity assignment module implemented at least partially in hardware to assign transfer intensities to the blend shape and the facial expression source, respectively, for individual points of the three dimensional facial expression, the assignment based on the detected semantic type; and a three dimensional facial expression generation module implemented at least partially in hardware to generate the three dimensional facial expression from the blend shape and the facial expression source based on the assigned transfer intensities. 16. The system as described in claim 15, wherein the semantic type detection module is configured to detect the semantic type by detecting a change of at least one facial part of the blend shape in comparison to the facial expression source. 17. The system as described in claim 15, wherein the transfer intensity assignment module is configured to assign the transfer intensities based at least in part on detecting an amount of movement detected for respective points between the blend shape and a model of the facial expression source. 18. The system as described in claim 17, wherein the transfer intensity assignment module is configured to assign the transfer intensities by assigning increasingly greater amounts of said weight to the blend shape than the facial expression source in response to increasingly greater detected amounts of the movement for the respective points. 19. The system as described in claim 17, wherein the transfer intensity assignment module is configured to assign the transfer intensities by assigning increasingly lesser amounts of said weight to the blend shape than the facial expression source in response to increasingly lesser detected amounts of the movement for the respective points. 20. 
The system as described in claim 15, wherein the transfer intensity assignment module is configured to assign the transfer intensities by optimizing a plurality of points of the three dimensional facial expression by minimizing a difference between neighboring vertices in a mesh formed using points of the facial expression source and maximizing a closeness to an amount of movement defined through comparison of corresponding points of the blend shape and the facial expression source. | A digital medium environment is described to generate a three dimensional facial expression from a blend shape and a facial expression source. A semantic type is detected that defines a facial expression of the blend shape. Transfer intensities are assigned based on the detected semantic type to the blend shape and the facial expression source, respectively, for individual portions of the three dimensional facial expression, the transfer intensities specifying weights given to the blend shape and the facial expression source, respectively, for the individual portions of the three dimensional facial expression. The three dimensional facial expression is generated from the blend shape and the facial expression source based on the assigned transfer intensities.1. In a digital medium environment to generate a three dimensional facial expression from a blend shape and a facial expression source, a method implemented by a computing device, the method comprising:
detecting, by the computing device, a semantic type defining a facial expression of the blend shape; assigning, by the computing device, transfer intensities based on the detected semantic type to the blend shape and the facial expression source, respectively, for individual portions of the three dimensional facial expression, the transfer intensities specifying weights given to the blend shape and the facial expression source, respectively, for the individual portions of the three dimensional facial expression; and generating, by the computing device, the three dimensional facial expression from the blend shape and the facial expression source based on the assigned transfer intensities. 2. The method as described in claim 1, wherein the detecting of the semantic type includes detecting a change of at least one facial part of the blend shape in comparison to the facial expression source. 3. The method as described in claim 1, further comprising creating, by the computing device, the facial expression source as a model in three-dimensions from a two-dimensional image by using a template and wherein the detecting of the semantic type is performed by comparing the model to the blend shape. 4. The method as described in claim 3, wherein the generating uses the two-dimensional image as a source of texture for the three dimensional facial expression. 5. The method as described in claim 1, further comprising repeating the detecting, the assigning, and the generating for a plurality of said blend shapes using the facial expression source. 6. The method as described in claim 1, wherein the assigning of the transfer intensities is based at least in part on an amount of movement detected for respective said portions from the facial expression source to the blend shape. 7. 
The method as described in claim 6, wherein the assigning includes assigning increasingly greater amounts of said weight to the blend shape than the facial expression source to a respective said portion in response to increasingly greater detected amounts of the movement for the respective points. 8. The method as described in claim 1, further comprising forming, by the computing device, an animation including the facial expression source as a frame and the three dimensional facial expression as another frame. 9. The method as described in claim 8, wherein the forming includes at least one other frame disposed in a sequence in the animation between the frame and the other frame, the at least one other frame formed by blending the facial expression source and the three dimensional facial expression. 10. In a digital medium environment to generate a three dimensional facial expression from a blend shape and a facial expression source, a method implemented by a computing device, the method comprising:
detecting, by the computing device, a semantic type defining a facial expression of the blend shape; defining, by the computing device, points on the facial expression source as corresponding to points on the blend shape based on the detected semantic type of the blend shape; assigning, by the computing device, transfer intensities to the points of the blend shape and the facial expression source, respectively, to form respective points of a plurality of points of the three dimensional facial expression, the assigning being nonlinear for the plurality of points, one to another, of the three dimensional facial expression based on the detected semantic type; and generating, by the computing device, the three dimensional facial expression from the blend shape and the facial expression source by forming the plurality of points using the assigned non-linear transfer intensities for respective said points of the blend shape and the facial expression source. 11. The method as described in claim 10, wherein the transfer intensities specify weights given to the blend shape and the facial expression source, respectively, for generating individual points of the plurality of points of the three dimensional facial expression. 12. The method as described in claim 10, wherein the assigning of the transfer intensities non-linearly includes:
computing, by the computing device, an initial said transfer intensity based on spatial differences between the blend shape and a model of the facial expression source; and augmenting, by the computing device, the initial said transfer intensity non-linearly based on comparison of the points on the facial expression source to respective said points of the blend shape. 13. The method as described in claim 10, wherein the generating includes computing, by the computing device, a smoothness factor based on vertices of the detected semantic type of the blend shape, the vertices defined at least in part using the points of the blend shape. 14. The method as described in claim 10, wherein the assigning of the transfer intensities includes optimizing the plurality of points of the three dimensional facial expression by minimizing a difference between neighboring vertices in a mesh formed using the points of the facial expression source and maximizing a closeness to an amount of movement defined through comparison of corresponding said points of the blend shape and the facial expression source. 15. In a digital medium environment to generate a three dimensional facial expression from a blend shape and a facial expression source, a system comprising:
a semantic type detection module implemented at least partially in hardware to detect a semantic type defining a facial expression of the blend shape; a transfer intensity assignment module implemented at least partially in hardware to assign transfer intensities to the blend shape and the facial expression source, respectively, for individual points of the three dimensional facial expression, the assignment based on the detected semantic type; and a three dimensional facial expression generation module implemented at least partially in hardware to generate the three dimensional facial expression from the blend shape and the facial expression source based on the assigned transfer intensities. 16. The system as described in claim 15, wherein the semantic type detection module is configured to detect the semantic type by detecting a change in at least one facial part of the blend shape in comparison to the facial expression source. 17. The system as described in claim 15, wherein the transfer intensity assignment module is configured to assign the transfer intensities based at least in part on detecting an amount of movement detected for respective points between the blend shape and a model of the facial expression source. 18. The system as described in claim 17, wherein the transfer intensity assignment module is configured to assign the transfer intensities by assigning increasingly greater amounts of said weight to the blend shape than the facial expression source in response to increasingly greater detected amounts of the movement for the respective points. 19. The system as described in claim 17, wherein the transfer intensity assignment module is configured to assign the transfer intensities by assigning increasingly lesser amounts of said weight to the blend shape than the facial expression source in response to increasingly lesser detected amounts of the movement for the respective points. 20. 
The system as described in claim 15, wherein the transfer intensity assignment module is configured to assign the transfer intensities by optimizing a plurality of points of the three dimensional facial expression by minimizing a difference between neighboring vertices in a mesh formed using points of the facial expression source and maximizing a closeness to an amount of movement defined through comparison of corresponding points of the blend shape and the facial expression source. | 2,600 |
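The non-linear transfer-intensity idea running through claims 10 to 20 above (points that move more between the neutral model and the blend shape take more of their weight from the blend shape; points that move less follow the facial expression source) can be sketched in code. This is an illustrative reading only: the function names, the exponential weighting curve, and the 2D point lists are assumptions, not the patented implementation.

```python
import math

def transfer_intensities(blend_pts, neutral_pts, steepness=4.0):
    """Assign a per-point weight in [0, 1] for the blend shape, growing
    non-linearly with the detected amount of movement at that point.
    The exponential curve is an illustrative choice, not from the claims."""
    weights = []
    for b, n in zip(blend_pts, neutral_pts):
        movement = math.dist(b, n)                  # detected amount of movement
        w = 1.0 - math.exp(-steepness * movement)   # non-linear in movement
        weights.append(w)
    return weights

def generate_expression(blend_pts, source_pts, neutral_pts):
    """Form each output point as a weighted mix of the blend shape point
    and the facial expression source point (claim 11's reading)."""
    ws = transfer_intensities(blend_pts, neutral_pts)
    return [
        tuple(w * bc + (1.0 - w) * sc for bc, sc in zip(b, s))
        for w, b, s in zip(ws, blend_pts, source_pts)
    ]
```

Under this sketch a stationary point gets weight 0 and copies the source exactly, matching the "increasingly lesser amounts of said weight" direction of claim 19.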
9,710 | 9,710 | 14,652,387 | 2,646 | A method for estimating the electric field strength associated to a radio wave emitted by an electromagnetic source of a cellular radio communication network within an area. The method includes: identifying a set of obstacles; determining at least one of: a direct visibility polygon of points in line of sight with the source; a reflection visibility polygon of points reachable by the wave after reflection by the obstacles; a diffraction visibility polygon of points reachable by the wave after diffraction by the obstacles. The visibility polygons are associated to respective values of the electric field computed therein. The method further includes: subdividing the area into pixels; for each pixel, determining if it belongs to at least one of the visibility polygons; and in the affirmative, determining the electric field strength at the pixel as a value proportional to the electric field computed at the at least one visibility polygon. | 1-13. (canceled) 14. A method for estimating electric field strength associated to a radio wave emitted by an electromagnetic source of a cellular radio communication network within an area of investigation, the method comprising:
a) identifying a set of obstacles within the area of investigation; b) determining at least one of:
a direct visibility polygon as a polygonal region within the area comprising points which are in line of sight with the electromagnetic source;
a reflection visibility polygon as a polygonal region within the area comprising points that may be reached by the radio wave after it has been reflected by at least one of the obstacles;
a diffraction visibility polygon as a polygonal region within the area comprising points that may be reached by the radio wave after it has been diffracted by at least one of the obstacles; and
wherein the direct visibility polygon, the reflection visibility polygon, and the diffraction visibility polygon are associated to respective values of the electric field computed therein; c) subdividing the area of investigation into a set of pixels; d) for each pixel of the set, determining if it belongs to at least one of the visibility polygons; and e) in affirmative of the determining, determining electric field strength at the considered pixel as a value proportional to the value of the electric field computed at the at least one visibility polygon. 15. The method according to claim 14, wherein a) comprises:
associating to each of the obstacles a respective obstacle polygon, each obstacle polygon being a two-dimensional polygon whose perimeter corresponds to an external boundary of the corresponding obstacle; computing, for each obstacle polygon, a distance from the electromagnetic source; and ordering the obstacle polygons according to increasing values of the computed distance. 16. The method according to claim 15, wherein a) further comprises:
determining a set of vertices for each of the obstacle polygons within the area of investigation; and ordering the vertices according to a direction along a boundary of the obstacle polygon. 17. The method according to claim 16, wherein b) comprises determining one or more visible sides of the obstacle polygons, each one of the visible sides being a side in line of sight with the electromagnetic source. 18. The method according to claim 17, wherein the at least one direct visibility polygon is determined by:
computing, for each visible side of the obstacle polygons that is in line of sight with the electromagnetic source, a shadow region as the region of points that are not visible from the electromagnetic source due to the considered visible side; determining the direct visibility polygon as the region of points which do not belong to any one of the shadow region. 19. The method according to claim 18, wherein the computing is performed by considering the obstacle polygons according to increasing values of their distance from the electromagnetic source. 20. The method according to claim 17, wherein the at least one reflection visibility polygon is determined by:
determining, for each one of the visible sides, an image source located at a position which is symmetric to the position of the electromagnetic source with respect to the considered visible side; determining a fictitious shadow region as the region of points that are not visible from the image source due to the considered visible side; determining, in the fictitious shadow region, sides or portions of sides of the obstacle polygons that are visible from the image source; for each one of the identified sides or portions of side visible from the image source, determining a further fictitious shadow region as the region of points that are not visible from the image source due to the considered side or portion of side; and determining the reflection visibility polygon as the region of points of the fictitious shadow region which do not belong to the further fictitious shadow regions. 21. The method according to claim 17, wherein the at least one diffraction visibility polygon is determined by:
identifying at least one diffraction edge of at least one of the obstacle polygons; associating the diffraction edge to an equivalent electromagnetic source; determining at least one visible side of the obstacle polygons that is in line of sight with the equivalent electromagnetic source; computing, for the at least one visible side of the obstacle polygons, a shadow region as the region of points that are not visible from the equivalent electromagnetic source due to the at least one visible side; determining the diffraction visibility polygon as the region of points which do not belong to the shadow region. 22. The method according to claim 21, wherein the identifying comprises identifying the at least one diffraction edge as an edge between two sides of an obstacle polygon forming a convex angle lower than 150°. 23. The method according to claim 14, further comprising displaying a map of the computed electric field strength inside the investigation area. 24. A system for estimating electric field strength associated to a radio wave emitted by an electromagnetic source of a cellular radio communication network within an area of investigation, the system comprising:
a processor configured to:
identify a set of obstacles within the area of investigation;
determine at least one of:
a direct visibility polygon as a polygonal region within the area comprising points which are in line of sight with the electromagnetic source;
a reflection visibility polygon as a polygonal region within the area comprising points that may be reached by the radio wave after it has been reflected by at least one of the obstacles;
a diffraction visibility polygon as a polygonal region within the area comprising points that may be reached by the radio wave after it has been diffracted by at least one of the obstacles;
wherein the direct visibility polygon, the reflection visibility polygon, and the diffraction visibility polygon are associated to respective values of the electric field computed therein; subdivide the area of investigation into a set of pixels; for each pixel of the set, determine if it belongs to at least one of the visibility polygons; and in affirmative of the determine, determine electric field strength at the considered pixel as a value proportional to the value of the electric field computed at the at least one visibility polygon. 25. The system according to claim 24, further comprising an output module configured to provide a map of the computed electric field strength inside the investigation area. 26. A non-transitory computer program product comprising computer-executable instructions for performing, when the program is run on a computer, the method according to claim 14. | A method for estimating the electric field strength associated to a radio wave emitted by an electromagnetic source of a cellular radio communication network within an area. The method includes: identifying a set of obstacles; determining at least one of: a direct visibility polygon of points in line of sight with the source; a reflection visibility polygon of points reachable by the wave after reflection by the obstacles; a diffraction visibility polygon of points reachable by the wave after diffraction by the obstacles. The visibility polygons are associated to respective values of the electric field computed therein. The method further includes: subdividing the area into pixels; for each pixel, determining if it belongs to at least one of the visibility polygons; and in the affirmative, determining the electric field strength at the pixel as a value proportional to the electric field computed at the at least one visibility polygon.1-13. (canceled) 14. 
A method for estimating electric field strength associated to a radio wave emitted by an electromagnetic source of a cellular radio communication network within an area of investigation, the method comprising:
a) identifying a set of obstacles within the area of investigation; b) determining at least one of:
a direct visibility polygon as a polygonal region within the area comprising points which are in line of sight with the electromagnetic source;
a reflection visibility polygon as a polygonal region within the area comprising points that may be reached by the radio wave after it has been reflected by at least one of the obstacles;
a diffraction visibility polygon as a polygonal region within the area comprising points that may be reached by the radio wave after it has been diffracted by at least one of the obstacles; and
wherein the direct visibility polygon, the reflection visibility polygon, and the diffraction visibility polygon are associated to respective values of the electric field computed therein; c) subdividing the area of investigation into a set of pixels; d) for each pixel of the set, determining if it belongs to at least one of the visibility polygons; and e) in affirmative of the determining, determining electric field strength at the considered pixel as a value proportional to the value of the electric field computed at the at least one visibility polygon. 15. The method according to claim 14, wherein a) comprises:
associating to each of the obstacles a respective obstacle polygon, each obstacle polygon being a two-dimensional polygon whose perimeter corresponds to an external boundary of the corresponding obstacle; computing, for each obstacle polygon, a distance from the electromagnetic source; and ordering the obstacle polygons according to increasing values of the computed distance. 16. The method according to claim 15, wherein a) further comprises:
determining a set of vertices for each of the obstacle polygons within the area of investigation; and ordering the vertices according to a direction along a boundary of the obstacle polygon. 17. The method according to claim 16, wherein b) comprises determining one or more visible sides of the obstacle polygons, each one of the visible sides being a side in line of sight with the electromagnetic source. 18. The method according to claim 17, wherein the at least one direct visibility polygon is determined by:
computing, for each visible side of the obstacle polygons that is in line of sight with the electromagnetic source, a shadow region as the region of points that are not visible from the electromagnetic source due to the considered visible side; determining the direct visibility polygon as the region of points which do not belong to any one of the shadow region. 19. The method according to claim 18, wherein the computing is performed by considering the obstacle polygons according to increasing values of their distance from the electromagnetic source. 20. The method according to claim 17, wherein the at least one reflection visibility polygon is determined by:
determining, for each one of the visible sides, an image source located at a position which is symmetric to the position of the electromagnetic source with respect to the considered visible side; determining a fictitious shadow region as the region of points that are not visible from the image source due to the considered visible side; determining, in the fictitious shadow region, sides or portions of sides of the obstacle polygons that are visible from the image source; for each one of the identified sides or portions of side visible from the image source, determining a further fictitious shadow region as the region of points that are not visible from the image source due to the considered side or portion of side; and determining the reflection visibility polygon as the region of points of the fictitious shadow region which do not belong to the further fictitious shadow regions. 21. The method according to claim 17, wherein the at least one diffraction visibility polygon is determined by:
identifying at least one diffraction edge of at least one of the obstacle polygons; associating the diffraction edge to an equivalent electromagnetic source; determining at least one visible side of the obstacle polygons that is in line of sight with the equivalent electromagnetic source; computing, for the at least one visible side of the obstacle polygons, a shadow region as the region of points that are not visible from the equivalent electromagnetic source due to the at least one visible side; determining the diffraction visibility polygon as the region of points which do not belong to the shadow region. 22. The method according to claim 21, wherein the identifying comprises identifying the at least one diffraction edge as an edge between two sides of an obstacle polygon forming a convex angle lower than 150°. 23. The method according to claim 14, further comprising displaying a map of the computed electric field strength inside the investigation area. 24. A system for estimating electric field strength associated to a radio wave emitted by an electromagnetic source of a cellular radio communication network within an area of investigation, the system comprising:
a processor configured to:
identify a set of obstacles within the area of investigation;
determine at least one of:
a direct visibility polygon as a polygonal region within the area comprising points which are in line of sight with the electromagnetic source;
a reflection visibility polygon as a polygonal region within the area comprising points that may be reached by the radio wave after it has been reflected by at least one of the obstacles;
a diffraction visibility polygon as a polygonal region within the area comprising points that may be reached by the radio wave after it has been diffracted by at least one of the obstacles;
wherein the direct visibility polygon, the reflection visibility polygon, and the diffraction visibility polygon are associated to respective values of the electric field computed therein; subdivide the area of investigation into a set of pixels; for each pixel of the set, determine if it belongs to at least one of the visibility polygons; and in affirmative of the determine, determine electric field strength at the considered pixel as a value proportional to the value of the electric field computed at the at least one visibility polygon. 25. The system according to claim 24, further comprising an output module configured to provide a map of the computed electric field strength inside the investigation area. 26. A non-transitory computer program product comprising computer-executable instructions for performing, when the program is run on a computer, the method according to claim 14. | 2,600 |
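The direct-visibility step of the field-estimation claims above (a pixel contributes a field value only if it is in line of sight with the electromagnetic source, claims 14, 17 and 18) can be illustrated with a minimal per-pixel test. The segment-crossing routine, the obstacle representation as line segments, and the 1/distance field model are all assumptions for illustration; the patent instead builds shadow regions per visible side and intersects polygons, which scales better than testing every pixel.

```python
def _ccw(a, b, c):
    # True if a, b, c are in counter-clockwise order.
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    """Proper-intersection test for segments p1-p2 and p3-p4
    (collinear touching cases are ignored in this sketch)."""
    return (_ccw(p1, p3, p4) != _ccw(p2, p3, p4)
            and _ccw(p1, p2, p3) != _ccw(p1, p2, p4))

def in_line_of_sight(source, pixel, obstacle_sides):
    # A pixel is directly visible if no obstacle side blocks the segment
    # from the source to the pixel centre.
    return not any(segments_cross(source, pixel, a, b)
                   for a, b in obstacle_sides)

def direct_field(source, pixel, obstacle_sides, e0=1.0):
    """Field value proportional to 1/distance for visible pixels, else 0."""
    if not in_line_of_sight(source, pixel, obstacle_sides):
        return 0.0
    d = ((pixel[0] - source[0]) ** 2 + (pixel[1] - source[1]) ** 2) ** 0.5
    return e0 / max(d, 1e-9)
```

A pixel behind a wall gets zero here; the patent would instead assign it the reflected or diffracted contribution from the corresponding visibility polygon.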
9,711 | 9,711 | 15,474,565 | 2,626 | Methods and systems are disclosed for using a head-mounted display that may consist of an image projector mounted to the head that projects one or more images onto a screen in front of one or both of the user's eyes. Moreover, head-mounted displays may also include electronics to track the position of the user's head. This tracking information can then be used as an input to change the display projected to the user—creating a Virtual Reality environment. Head tracking may be combined with transparent or semi-transparent display screens, to enable a user to see both a projected image and the physical world beyond the display screen. In certain embodiments, tracking information may be used to adjust the location of a projected image to compensate for the detected head movement. | 1. A head-mounted display, comprising:
a screen; a projector for projecting one or more images onto one or more mirrors; a controller for orienting the one or more mirrors to direct the one or more images onto one or more locations on the screen; one or more sensors for detecting movement of a head of a wearer of the head-mounted display; and wherein the controller is configured for orienting the one or more mirrors to redirect the one or more images to compensate for the detected head movement. 2. The head-mounted display of claim 1, wherein the projector is configured for rendering the one or more images in one or more frames. 3. The head-mounted display of claim 2, wherein the controller is configured to center the one or more mirrors between each of the one or more frames. 4. The head-mounted display of claim 1, wherein the projector is configured for projecting images on the back of the screen. 5. The head-mounted display of claim 1, wherein the projector is configured for projecting images on the front of the screen. 6. The head-mounted display of claim 1, wherein the one or more images comprise one or more labels and the one or more locations comprise one or more locations on the screen proximate to one or more objects viewable through the screen. 7. The head-mounted display of claim 1, wherein the screen is transparent. 8. The head-mounted display of claim 1, wherein the screen is semi-transparent. 9. The head-mounted display of claim 1, wherein the controller comprises a rotating actuator. 10. The head-mounted display of claim 1, wherein the controller comprises one or more actuators for orienting the one or more mirrors in one or more dimensions. 11. A method for compensating for head movement of a wearer of a head-mounted display, comprising:
providing a head-mounted display comprising a screen; projecting one or more images onto one or more mirrors; orienting the one or more mirrors to redirect the one or more images onto one or more locations on the screen; detecting movement of a head of a wearer of the head-mounted display; and orienting the one or more mirrors to redirect the one or more images to compensate for the detected head movement. 12. The method of claim 11, wherein the step of projecting one or more images comprises projecting one or more frames. 13. The method of claim 12, wherein the step of orienting the one or more mirrors comprises centering the one or more mirrors between each of the one or more frames. 14. The method of claim 11, wherein the one or more images comprise one or more labels and the one or more locations comprises one or more locations on the screen proximate one or more objects viewable through the screen. 15. The method of claim 11, wherein the screen is transparent. 16. The method of claim 11, wherein the screen is semi-transparent. | Methods and systems are disclosed for using a head-mounted display that may consist of an image projector mounted to the head that projects one or more images onto a screen in front of one or both of the user's eyes. Moreover, head-mounted displays may also include electronics to track the position of the user's head. This tracking information can then be used as an input to change the display projected to the user—creating a Virtual Reality environment. Head tracking may be combined with transparent or semi-transparent display screens, to enable a user to see both a projected image and the physical world beyond the display screen. In certain embodiments, tracking information may be used to adjust the location of a projected image to compensate for the detected head movement.1. A head-mounted display, comprising:
a screen; a projector for projecting one or more images onto one or more mirrors; a controller for orienting the one or more mirrors to direct the one or more images onto one or more locations on the screen; one or more sensors for detecting movement of a head of a wearer of the head-mounted display; and wherein the controller is configured for orienting the one or more mirrors to redirect the one or more images to compensate for the detected head movement. 2. The head-mounted display of claim 1, wherein the projector is configured for rendering the one or more images in one or more frames. 3. The head-mounted display of claim 2, wherein the controller is configured to center the one or more mirrors between each of the one or more frames. 4. The head-mounted display of claim 1, wherein the projector is configured for projecting images on the back of the screen. 5. The head-mounted display of claim 1, wherein the projector is configured for projecting images on the front of the screen. 6. The head-mounted display of claim 1, wherein the one or more images comprise one or more labels and the one or more locations comprise one or more locations on the screen proximate to one or more objects viewable through the screen. 7. The head-mounted display of claim 1, wherein the screen is transparent. 8. The head-mounted display of claim 1, wherein the screen is semi-transparent. 9. The head-mounted display of claim 1, wherein the controller comprises a rotating actuator. 10. The head-mounted display of claim 1, wherein the controller comprises one or more actuators for orienting the one or more mirrors in one or more dimensions. 11. A method for compensating for head movement of a wearer of a head-mounted display, comprising:
providing a head-mounted display comprising a screen; projecting one or more images onto one or more mirrors; orienting the one or more mirrors to redirect the one or more images onto one or more locations on the screen; detecting movement of a head of a wearer of the head-mounted display; and orienting the one or more mirrors to redirect the one or more images to compensate for the detected head movement. 12. The method of claim 11, wherein the step of projecting one or more images comprises projecting one or more frames. 13. The method of claim 12, wherein the step of orienting the one or more mirrors comprises centering the one or more mirrors between each of the one or more frames. 14. The method of claim 11, wherein the one or more images comprise one or more labels and the one or more locations comprises one or more locations on the screen proximate one or more objects viewable through the screen. 15. The method of claim 11, wherein the screen is transparent. 16. The method of claim 11, wherein the screen is semi-transparent. | 2,600
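The mirror-compensation loop of the head-mounted-display claims above (claims 1, 3, and 11 to 13: detect head movement, reorient the mirror to redirect the image, re-centre the mirror between frames) can be sketched as below. The half-angle relation, that rotating a mirror by some angle deflects a reflected beam by twice that angle, motivates dividing the detected head rotation by two; the actuator limit and the specific angles are illustrative assumptions, not from the source.

```python
def mirror_correction(head_delta_deg, max_mirror_deg=5.0):
    """Mirror angle (degrees) that redirects the projected image against a
    detected head rotation of head_delta_deg; saturates at an assumed
    actuator limit. Reflection doubles mirror motion, hence the half."""
    angle = -head_delta_deg / 2.0
    return max(-max_mirror_deg, min(max_mirror_deg, angle))

def run_frame_loop(head_deltas):
    """Per claim 3's idea: re-centre the mirror between frames, then apply
    the correction for the head movement detected during that frame."""
    angles = []
    for delta in head_deltas:
        mirror = 0.0                       # centre between frames
        mirror = mirror_correction(delta)  # compensate this frame's motion
        angles.append(mirror)
    return angles
```

For example, a detected 2-degree head turn would be countered by a 1-degree mirror offset in the opposite direction under this sketch.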
9,712 | 9,712 | 14,823,733 | 2,626 | Touch, multi-touch, gesture, flick and stylus pen input may be supported for remoted applications. For example, a touch capable client device may receive touch input for a remoted application executing on a server. In such an instance, the touch input may be transmitted to the server for processing. The server may subsequently modify the application display or the application functionality and provide an output to the client device. In some arrangements, the output may correspond to instructions for modifying a display of the application while in other examples, the output may correspond to an image of the changed application display. Additionally or alternatively, determining a functionality associated with touch input may be performed based on user definitions, user preferences, server definitions (e.g., operating system on the server), client definitions (e.g., operating system on the client) and the like and/or combinations thereof. Aspects may also include resolving latency and enhancing user experience using various features. | 1. One or more non-transitory computer-readable media storing executable instructions that, when executed by at least one processor, cause a system to:
receive a first touch input event from a remote computing device different from the system, wherein the first touch input event is responsive to a first application executing on the system, the first application being presented on the remote computing device; and receive a second touch input event from the remote computing device, wherein the second touch input event is responsive to a second application executing on the system, the second application being presented on the remote computing device, the second application being different from the first application, the first touch input event and the second touch input event being a same type of touch input event, wherein the second application interprets the second touch input event differently than the first application interprets the first touch input event. 2. The one or more non-transitory computer-readable media of claim 1, wherein the first touch input event includes a multi-touch input event. 3. The one or more non-transitory computer-readable media of claim 1, wherein the executable instructions, when executed by the at least one processor, cause the system to:
receive, from a remote application client executing on the computing device, information relating to the first touch input event, wherein the remote application client is configured to coordinate communications between the system and the computing device for interacting with the first application. 4. The one or more non-transitory computer-readable media of claim 1, wherein the executable instructions, when executed by the at least one processor, cause the system to:
receive information relating to the first touch input event from the computing device, the information including raw input data detected by one or more touch-sensitive hardware elements of the computing device. 5. The one or more non-transitory computer-readable media of claim 1, wherein the executable instructions, when executed by the at least one processor, cause the system to:
determine whether the second touch input event is to be processed locally at the system or remotely at the computing device based on at least one of an application to which the second touch input event is directed and a type of the touch input event. 6. The one or more non-transitory computer-readable media of claim 1, wherein the executable instructions, when executed by the at least one processor, cause the system to:
determine whether the second touch input event is to be processed locally at the system or remotely at the computing device based on at least one of an amount of available network bandwidth between the system and the computing device and a processing speed of the application. 7. The one or more non-transitory computer-readable media of claim 6, wherein the executable instructions, when executed by the at least one processor, cause the system to:
process the second touch input event locally at the system in response to at least one of:
the amount of available network bandwidth being above a specified bandwidth threshold; and
the processing speed of the application being above a speed threshold. 8. The one or more non-transitory computer-readable media of claim 6, wherein the executable instructions, when executed by the at least one processor, cause the system to:
store the second touch input event in a queue; determine an age of the second touch input event; and discard the second touch input event responsive to the age of the second touch input event reaching a specified age. 9. The one or more non-transitory computer-readable media of claim 8, wherein discarding the second touch input event responsive to the age of the second touch input event reaching the specified age comprises discarding a touch input event having an oldest creation date. 10. The one or more non-transitory computer-readable media of claim 8, wherein the executable instructions, when executed by the at least one processor, cause the system to:
discard all touch input events existing for a specified amount of time. 11. The one or more non-transitory computer-readable media of claim 1, wherein the executable instructions, when executed by the at least one processor, cause the system to:
determine whether to process the second touch input event locally at the system or remotely at the computing device based on one or more rules; and cause the second touch input event to be processed remotely at the computing device in response to determining that the second touch input event is to be processed remotely, wherein processing of the second touch input event remotely at the computing device includes a modification of a display of the application. 12. The one or more non-transitory computer-readable media of claim 11, wherein the one or more rules include at least one rule that one or more specified types of touch inputs are to be processed remotely. 13. The one or more non-transitory computer-readable media of claim 11, wherein the one or more rules are stored by one or more of the system, the computing device, and the application. 14. The one or more non-transitory computer-readable media of claim 1, wherein the first application and the second application respectively define how to interpret a type of touch input. 15. A method comprising:
detecting, by a first computing device, a first touch input event directed to a first application executing on a second computing device; detecting, by the first computing device, a second touch input event directed to a second application executing on the second computing device, the second application being different from the first application, the first touch input event and the second touch input event being a same type of touch input event, wherein the first touch input event and the second touch input event are interpreted differently; determining, by the first computing device, an amount of network latency between the first computing device and the second computing device; determining whether the amount of network latency is above a specified threshold; and in response to determining that the amount of network latency is above the specified threshold, processing the first touch input event locally at the first computing device. 16. The method of claim 15, comprising:
when the touch input event is processed locally at the first computing device, transmitting information relating to the touch input event to the second computing device. 17. The method of claim 16, comprising:
receiving an application display update from the second computing device in response to transmitting the information relating to the touch input event to the second computing device. 18. A system comprising:
at least one processor; and non-transitory memory storing executable instructions that, when executed by the at least one processor, cause the system to:
receive a message comprising first touch input from a computing device different from the system, the first touch input being directed to a first application executing on the system;
generate a touch input notification message in a first message format configured to trigger a function call by the first application for processing the first touch input;
forward the touch input notification message to the first application executing on the system; and
receive a message comprising second touch input from the computing device, the second touch input being directed to a second application executing on the system, the second application being different from the first application, the first touch input directed to the first application and the second touch input directed to the second application being a same type of touch input,
wherein the second application interprets the second touch input differently than the first application interprets the first touch input. 19. The system of claim 18, wherein a function triggered by the function call by the first application for processing the first touch input is executed differently based on a type of the system. 20. The system of claim 18, wherein the non-transitory memory stores executable instructions that, when executed by the at least one processor, cause the system to:
transmit information related to the first touch input to the computing device; and receive an application display update from the computing device in response to transmitting the information related to the first touch input to the computing device. | Touch, multi-touch, gesture, flick and stylus pen input may be supported for remoted applications. For example, a touch capable client device may receive touch input for a remoted application executing on a server. In such an instance, the touch input may be transmitted to the server for processing. The server may subsequently modify the application display or the application functionality and provide an output to the client device. In some arrangements, the output may correspond to instructions for modifying a display of the application while in other examples, the output may correspond to an image of the changed application display. Additionally or alternatively, determining a functionality associated with touch input may be performed based on user definitions, user preferences, server definitions (e.g., operating system on the server), client definitions (e.g., operating system on the client) and the like and/or combinations thereof. Aspects may also include resolving latency and enhancing user experience using various features.1. One or more non-transitory computer-readable media storing executable instructions that, when executed by at least one processor, cause a system to:
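The decision logic recited in claims 6-10 and 15 above (process a touch event locally when network latency is above a threshold, and discard queued events once they reach a specified age, oldest first) can be sketched as follows. This is an illustrative sketch only: the threshold values, class names, and the use of a monotonic clock are assumptions, not details from the patent.

```python
import time
from collections import deque

LATENCY_THRESHOLD_MS = 100      # hypothetical "specified threshold" (claim 15)
MAX_EVENT_AGE_S = 0.5           # hypothetical "specified age" (claim 8)

class TouchEventQueue:
    """Queue that discards touch input events once their age reaches a
    specified age, starting with the oldest creation date (cf. claims 8-10)."""

    def __init__(self, max_age_s=MAX_EVENT_AGE_S):
        self.max_age_s = max_age_s
        self.events = deque()   # (created_at, event) pairs, oldest first

    def push(self, event, now=None):
        now = time.monotonic() if now is None else now
        self.events.append((now, event))

    def expire(self, now=None):
        """Drop every event whose age has reached max_age_s; return count."""
        now = time.monotonic() if now is None else now
        dropped = 0
        while self.events and now - self.events[0][0] >= self.max_age_s:
            self.events.popleft()
            dropped += 1
        return dropped

def process_locally(latency_ms, threshold_ms=LATENCY_THRESHOLD_MS):
    """Claim 15: process the touch event locally when the measured network
    latency is above the specified threshold."""
    return latency_ms > threshold_ms
```

For example, an event created at t=0.0 is expired at t=0.6 under the 0.5 s limit, while an event created at t=0.4 survives, matching the oldest-first discard order of claim 9.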
receive a first touch input event from a remote computing device different from the system, wherein the first touch input event is responsive to a first application executing on the system, the first application being presented on the remote computing device; and receive a second touch input event from the remote computing device, wherein the second touch input event is responsive to a second application executing on the system, the second application being presented on the remote computing device, the second application being different from the first application, the first touch input event and the second touch input event being a same type of touch input event, wherein the second application interprets the second touch input event differently than the first application interprets the first touch input event. 2. The one or more non-transitory computer-readable media of claim 1, wherein the first touch input event includes a multi-touch input event. 3. The one or more non-transitory computer-readable media of claim 1, wherein the executable instructions, when executed by the at least one processor, cause the system to:
receive, from a remote application client executing on the computing device, information relating to the first touch input event, wherein the remote application client is configured to coordinate communications between the system and the computing device for interacting with the first application. 4. The one or more non-transitory computer-readable media of claim 1, wherein the executable instructions, when executed by the at least one processor, cause the system to:
receive information relating to the first touch input event from the computing device, the information including raw input data detected by one or more touch-sensitive hardware elements of the computing device. 5. The one or more non-transitory computer-readable media of claim 1, wherein the executable instructions, when executed by the at least one processor, cause the system to:
determine whether the second touch input event is to be processed locally at the system or remotely at the computing device based on at least one of an application to which the second touch input event is directed and a type of the touch input event. 6. The one or more non-transitory computer-readable media of claim 1, wherein the executable instructions, when executed by the at least one processor, cause the system to:
determine whether the second touch input event is to be processed locally at the system or remotely at the computing device based on at least one of an amount of available network bandwidth between the system and the computing device and a processing speed of the application. 7. The one or more non-transitory computer-readable media of claim 6, wherein the executable instructions, when executed by the at least one processor, cause the system to:
process the second touch input event locally at the system in response to at least one of:
the amount of available network bandwidth being above a specified bandwidth threshold; and
the processing speed of the application being above a speed threshold. 8. The one or more non-transitory computer-readable media of claim 6, wherein the executable instructions, when executed by the at least one processor, cause the system to:
store the second touch input event in a queue; determine an age of the second touch input event; and discard the second touch input event responsive to the age of the second touch input event reaching a specified age. 9. The one or more non-transitory computer-readable media of claim 8, wherein discarding the second touch input event responsive to the age of the second touch input event reaching the specified age comprises discarding a touch input event having an oldest creation date. 10. The one or more non-transitory computer-readable media of claim 8, wherein the executable instructions, when executed by the at least one processor, cause the system to:
discard all touch input events existing for a specified amount of time. 11. The one or more non-transitory computer-readable media of claim 1, wherein the executable instructions, when executed by the at least one processor, cause the system to:
determine whether to process the second touch input event locally at the system or remotely at the computing device based on one or more rules; and cause the second touch input event to be processed remotely at the computing device in response to determining that the second touch input event is to be processed remotely, wherein processing of the second touch input event remotely at the computing device includes a modification of a display of the application. 12. The one or more non-transitory computer-readable media of claim 11, wherein the one or more rules include at least one rule that one or more specified types of touch inputs are to be processed remotely. 13. The one or more non-transitory computer-readable media of claim 11, wherein the one or more rules are stored by one or more of the system, the computing device, and the application. 14. The one or more non-transitory computer-readable media of claim 1, wherein the first application and the second application respectively define how to interpret a type of touch input. 15. A method comprising:
detecting, by a first computing device, a first touch input event directed to a first application executing on a second computing device; detecting, by the first computing device, a second touch input event directed to a second application executing on the second computing device, the second application being different from the first application, the first touch input event and the second touch input event being a same type of touch input event, wherein the first touch input event and the second touch input event are interpreted differently; determining, by the first computing device, an amount of network latency between the first computing device and the second computing device; determining whether the amount of network latency is above a specified threshold; and in response to determining that the amount of network latency is above the specified threshold, processing the first touch input event locally at the first computing device. 16. The method of claim 15, comprising:
when the touch input event is processed locally at the first computing device, transmitting information relating to the touch input event to the second computing device. 17. The method of claim 16, comprising:
receiving an application display update from the second computing device in response to transmitting the information relating to the touch input event to the second computing device. 18. A system comprising:
at least one processor; and non-transitory memory storing executable instructions that, when executed by the at least one processor, cause the system to:
receive a message comprising first touch input from a computing device different from the system, the first touch input being directed to a first application executing on the system;
generate a touch input notification message in a first message format configured to trigger a function call by the first application for processing the first touch input;
forward the touch input notification message to the first application executing on the system; and
receive a message comprising second touch input from the computing device, the second touch input being directed to a second application executing on the system, the second application being different from the first application, the first touch input directed to the first application and the second touch input directed to the second application being a same type of touch input,
wherein the second application interprets the second touch input differently than the first application interprets the first touch input. 19. The system of claim 18, wherein a function triggered by the function call by the first application for processing the first touch input is executed differently based on a type of the system. 20. The system of claim 18, wherein the non-transitory memory stores executable instructions that, when executed by the at least one processor, cause the system to:
transmit information related to the first touch input to the computing device; and receive an application display update from the computing device in response to transmitting the information related to the first touch input to the computing device. | 2,600 |
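Claim 18 above describes generating a touch input notification message in a message format that triggers a function call by the target application, with the key property that two applications interpret the same type of touch input differently. A minimal dispatch sketch, assuming hypothetical application names and handler signatures not found in the patent:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class TouchNotification:
    """Hypothetical 'first message format' (cf. claim 18): wraps a touch
    input so the target application can be invoked via a function call."""
    app_id: str
    touch_type: str   # e.g. "pinch"
    payload: dict

class NotificationRouter:
    """Forwards touch input notification messages to the application
    executing on the system (cf. claim 18)."""

    def __init__(self):
        self.handlers: Dict[str, Callable[[TouchNotification], str]] = {}

    def register(self, app_id: str, handler: Callable[[TouchNotification], str]):
        self.handlers[app_id] = handler

    def forward(self, note: TouchNotification) -> str:
        # Each registered application interprets the same touch type
        # in its own way (claim 14: applications define interpretation).
        return self.handlers[note.app_id](note)
```

Registering a maps-style application that treats "pinch" as zoom and a photos-style application that treats "pinch" as rotate illustrates the same-type, different-interpretation property of claims 1 and 18.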
9,713 | 9,713 | 13,772,739 | 2,687 | A patient care system described herein includes a first electromagnetic coupler associated with the patient, at least one patient-centric appliance in communication with the first coupler, and an occupant support for supporting the patient. The occupant support has a second electromagnetic coupler associated therewith. At least one of the couplers is connectable to an electrical energy source for energizing the coupler. The first and second couplers form a noncontact electromagnetic coupling. An occupant wearable item and an occupant support, both of which are useable with the patient care system, are also described. | 1. A patient care system comprising:
a first electromagnetic coupler associated with the patient; at least one patient-centric appliance in communication with the first coupler; an occupant support for supporting the patient, the occupant support having a second electromagnetic coupler associated therewith, at least one of the couplers being connectable to an electrical energy source for energizing the coupler, the first and second couplers forming a noncontact electromagnetic coupling. 2. The system of claim 1 wherein the couplers are capacitive couplers and form a capacitive coupling. 3. The system of claim 1 wherein the couplers are inductive couplers and form an inductive coupling. 4. The system of claim 1 wherein the coupling is adapted to convey noninformational electrical energy. 5. The system of claim 1 wherein the coupling is adapted to convey information. 6. The system of claim 1 wherein the coupling is adapted to convey noninformational electrical energy and information. 7. The system of claim 1 including a communication module adapted to communicate with a patient nonspecific database to retrieve patient specific information. 8. The system of claim 7 wherein the communication module is a component of the occupant support. 9. The system of claim 1 wherein only the second electromagnetic coupler is connectable to the electrical energy source and the electromagnetic coupling conveys noninformational electrical energy, information, or both from the occupant support to the first coupler. 10. The system of claim 1 wherein the patient-centric appliance and the first coupler are components of a patient wearable item. 11. The system of claim 1 wherein the patient-centric appliance is selected from the group consisting of a physiological sensor, a memory device, a user interface device and a display. 12. The system of claim 1 including a patient wearable garment which hosts the patient-centric appliance and the first coupler. 13. 
The system of claim 12 wherein the patient-centric appliance is selected from the group consisting of a physiological sensor, a memory device, a user interface device and a display. 14. An occupant wearable item comprising a substrate, a first electromagnetic coupler bound to the substrate and at least one occupant-centric appliance in communication with the first coupler, the first coupler being adapted to form a noncontact electromagnetic coupling with a second noncontact electromagnetic coupler which is not bound to the substrate. 15. The item of claim 14 wherein the item is a sleepwear garment and wherein the coupler and the appliance are adhered to the garment. 16. The item of claim 14 wherein the item is a sleepwear garment and wherein the coupler and the appliance are stitched to the garment. 17. The item of claim 14 wherein the item is a sleepwear garment and wherein the coupler and the appliance reside in a pocket of the garment. 18. The item of claim 14 wherein the first coupler is a capacitive coupler adapted to form a capacitive coupling with the second coupler. 19. The item of claim 14 wherein the first coupler is an inductive coupler adapted to form an inductive coupling with the second coupler. 20. The item of claim 14 wherein the coupling is adapted to convey noninformational electrical energy. 21. The item of claim 14 wherein the coupling is adapted to convey information. 22. The item of claim 14 wherein the coupling is adapted to convey noninformational electrical energy and information. 23. The item of claim 14 wherein the occupant-centric appliance is selected from the group consisting of a physiological sensor, a memory device, a user interface device and a display. 24. 
An occupant support which includes a second electromagnetic coupler adapted to form a noncontact electromagnetic coupling with a first electromagnetic coupler associated with an occupant of the occupant support, and a control module adapted to receive information conveyed across the coupling and to issue a signal to configure the occupant support for the occupant in response to the conveyed information. 25. The occupant support of claim 24 wherein the issued signal commands a change in an existing state of the occupant support. 26. The occupant support of claim 24 wherein the issued signal causes a constraint to be imposed on an otherwise achievable state of the occupant support. 27. The occupant support of claim 24 wherein the second coupler is a capacitive coupler adapted to form a capacitive coupling with the first coupler. 28. The occupant support of claim 24 wherein the second coupler is an inductive coupler adapted to form an inductive coupling with the first coupler. 29. The occupant support of claim 24 including a communication module adapted to communicate with a patient nonspecific database to retrieve patient specific information. 30. The occupant support of claim 24 wherein only the second electromagnetic coupler is connectable to an electrical energy source and the electromagnetic coupling conveys noninformational electrical energy, information, or both. | A patient care system described herein includes a first electromagnetic coupler associated with the patient, at least one patient-centric appliance in communication with the first coupler, and an occupant support for supporting the patient. The occupant support has a second electromagnetic coupler associated therewith. At least one of the couplers is connectable to an electrical energy source for energizing the coupler. The first and second couplers form a noncontact electromagnetic coupling. 
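Claims 24-26 above describe a control module that receives information conveyed across the coupling and issues a signal configuring the occupant support, either commanding a change in an existing state (claim 25) or imposing a constraint on an otherwise achievable state (claim 26). A minimal sketch of that decision, in which all field names and numeric limits are hypothetical assumptions, not values from the patent:

```python
def configure_support(occupant_info: dict, current_state: dict) -> dict:
    """Issue a configuration signal for the occupant support based on
    information conveyed across the electromagnetic coupling (cf. claim 24)."""
    signal = {}
    # Claim 25: command a change in an existing state of the support.
    if "preferred_head_angle" in occupant_info:
        signal["set_head_angle"] = occupant_info["preferred_head_angle"]
    # Claim 26: impose a constraint on an otherwise achievable state,
    # e.g. cap the deck height for a fall-risk occupant.
    if occupant_info.get("fall_risk"):
        signal["max_height_cm"] = min(current_state.get("max_height_cm", 80), 40)
    return signal
```

With no occupant information conveyed, no signal is issued; with a preferred head angle and a fall-risk flag, the sketch emits both a commanded change and a constraint.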
An occupant wearable item and an occupant support, both of which are useable with the patient care system, are also described. 1. A patient care system comprising:
a first electromagnetic coupler associated with the patient; at least one patient-centric appliance in communication with the first coupler; an occupant support for supporting the patient, the occupant support having a second electromagnetic coupler associated therewith, at least one of the couplers being connectable to an electrical energy source for energizing the coupler, the first and second couplers forming a noncontact electromagnetic coupling. 2. The system of claim 1 wherein the couplers are capacitive couplers and form a capacitive coupling. 3. The system of claim 1 wherein the couplers are inductive couplers and form an inductive coupling. 4. The system of claim 1 wherein the coupling is adapted to convey noninformational electrical energy. 5. The system of claim 1 wherein the coupling is adapted to convey information. 6. The system of claim 1 wherein the coupling is adapted to convey noninformational electrical energy and information. 7. The system of claim 1 including a communication module adapted to communicate with a patient nonspecific database to retrieve patient specific information. 8. The system of claim 7 wherein the communication module is a component of the occupant support. 9. The system of claim 1 wherein only the second electromagnetic coupler is connectable to the electrical energy source and the electromagnetic coupling conveys noninformational electrical energy, information, or both from the occupant support to the first coupler. 10. The system of claim 1 wherein the patient-centric appliance and the first coupler are components of a patient wearable item. 11. The system of claim 1 wherein the patient-centric appliance is selected from the group consisting of a physiological sensor, a memory device, a user interface device and a display. 12. The system of claim 1 including a patient wearable garment which hosts the patient-centric appliance and the first coupler. 13. 
The system of claim 12 wherein the patient-centric appliance is selected from the group consisting of a physiological sensor, a memory device, a user interface device and a display. 14. An occupant wearable item comprising a substrate, a first electromagnetic coupler bound to the substrate and at least one occupant-centric appliance in communication with the first coupler, the first coupler being adapted to form a noncontact electromagnetic coupling with a second noncontact electromagnetic coupler which is not bound to the substrate. 15. The item of claim 14 wherein the item is a sleepwear garment and wherein the coupler and the appliance are adhered to the garment. 16. The item of claim 14 wherein the item is a sleepwear garment and wherein the coupler and the appliance are stitched to the garment. 17. The item of claim 14 wherein the item is a sleepwear garment and wherein the coupler and the appliance reside in a pocket of the garment. 18. The item of claim 14 wherein the first coupler is a capacitive coupler adapted to form a capacitive coupling with the second coupler. 19. The item of claim 14 wherein the first coupler is an inductive coupler adapted to form an inductive coupling with the second coupler. 20. The item of claim 14 wherein the coupling is adapted to convey noninformational electrical energy. 21. The item of claim 14 wherein the coupling is adapted to convey information. 22. The item of claim 14 wherein the coupling is adapted to convey noninformational electrical energy and information. 23. The item of claim 14 wherein the occupant-centric appliance is selected from the group consisting of a physiological sensor, a memory device, a user interface device and a display. 24. 
An occupant support which includes a second electromagnetic coupler adapted to form a noncontact electromagnetic coupling with a first electromagnetic coupler associated with an occupant of the occupant support, and a control module adapted to receive information conveyed across the coupling and to issue a signal to configure the occupant support for the occupant in response to the conveyed information. 25. The occupant support of claim 24 wherein the issued signal commands a change in an existing state of the occupant support. 26. The occupant support of claim 24 wherein the issued signal causes a constraint to be imposed on an otherwise achievable state of the occupant support. 27. The occupant support of claim 24 wherein the second coupler is a capacitive coupler adapted to form a capacitive coupling with the first coupler. 28. The occupant support of claim 24 wherein the second coupler is an inductive coupler adapted to form an inductive coupling with the first coupler. 29. The occupant support of claim 24 including a communication module adapted to communicate with a patient nonspecific database to retrieve patient specific information. 30. The occupant support of claim 24 wherein only the second electromagnetic coupler is connectable to an electrical energy source and the electromagnetic coupling conveys noninformational electrical energy, information, or both. | 2,600 |
9,714 | 9,714 | 14,434,314 | 2,647 | A node of a wireless communication network ( 150; 160; 170; 210; 220; 230; 240 ) receives an indication from a user equipment ( 10 ). The indication indicates that a limitation of volume charged packet data services is required for the user equipment ( 10 ). Depending on the received indication, the node ( 150; 160; 170; 210; 220; 230; 240 ) prevents transmission of packet data associated with the volume charged packet data services and allows transmission of packet data associated with one or more other packet data services. | 1. A method of controlling packet data connectivity in a wireless communication network, the method comprising:
a node of the wireless communication network receiving, from a user equipment, an indication that a limitation of volume charged packet data services is required for the user equipment; and depending on the received indication, the node preventing transmission of packet data traffic associated with the volume charged packet data services and allowing transmission of packet data traffic associated with one or more other packet data services. 2. The method according to claim 1, wherein the indication indicates one of (A) that none of the volume charged packet data services are allowed for the user equipment and (B) that none of the volume charged packet data services are allowed while the user equipment is roaming, and wherein the indication further indicates one or more access networks in which the indication is applicable. 3-4. (canceled) 5. The method of claim 1, further comprising:
the node configuring at least one packet filter which blocks the packet data traffic associated with the volume charged packet data services and passes the packet data traffic associated with the one or more other packet data services, wherein the at least one packet filter operates in the user equipment. 6-8. (canceled) 9. The method according to claim 5, wherein the node receives the indication in one of (A) a procedure for attaching the user equipment to the wireless communication network and (B) a procedure for configuring packet data network connectivity between the user equipment and the wireless communication network. 10-12. (canceled) 13. The method according to claim 1, wherein the node prevents the transmission of the packet data traffic associated with the at least one volume charged packet data service by at least one of (A) inactivating a media component of the at least one of the volume charged packet data service and (B) rejecting a request associated with the at least one volume charged packet data service. 14. (canceled) 15. The method according to claim 13, wherein the node receives the indication in a procedure for registering the user equipment for the at least one volume charged packet data service. 16. (canceled) 17. A method of controlling packet data connectivity in a wireless communication network, the method comprising:
a user equipment detecting a command to switch off packet data connectivity of the user equipment; and in response to the command, the user equipment sending, to the wireless communication network, an indication that a limitation of volume charged packet data services is required for the user equipment. 18. The method according to claim 17, wherein the indication indicates one of (A) that none of the volume charged packet data services are allowed for the user equipment and (B) that none of the volume charged packet data services are allowed while the user equipment is roaming, and wherein the indication further indicates one or more access networks in which the indication is applicable. 19-20. (canceled) 21. The method according to claim 17, wherein the user equipment sends the indication in at least one of (A) a procedure for attaching the user equipment to the wireless communication network, (B) a procedure for configuring packet data network connectivity between the user equipment and the wireless communication network, and (C) a procedure for registering the user equipment for at least one of the volume charged packet data services. 22-23. (canceled) 24. The method according to claim 17, further comprising:
in response to sending the indication, the user equipment receiving an indication of a packet filter; and the user equipment installing the packet filter, the packet filter blocking the packet data traffic associated with the volume charged packet data services and passing the packet data traffic associated with the one or more other packet data services. 25. (canceled) 26. A node for a cellular network, the node comprising:
at least one interface; and at least one processor, the at least one processor being configured to:
receive, from a user equipment, an indication that a limitation of volume charged packet data services is required for the user equipment; and
depending on the received indication, prevent transmission of packet data associated with the volume charged packet data services and allowing transmission of packet data associated with one or more other packet data services. 27. The node according to claim 26, wherein the indication indicates one of (A) that none of the volume charged packet data services are allowed for the user equipment and (B) that none of the volume charged packet data services are allowed while the user equipment is roaming, and further indicates one or more access networks in which the indication is applicable. 28-29. (canceled) 30. The node according to claim 26, wherein the at least one processor is configured to configure at least one packet filter which blocks the packet data traffic associated with the volume charged packet data services and passes the packet data traffic associated with the one or more other packet data services, and wherein the packet filter operates in the user equipment. 31. The node according to claim 30, wherein the packet filter operates in at least one of (A) the node and (B) a further node of the wireless communication network. 32-33. (canceled) 34. The node according to claim 30, wherein the at least one processor is configured to receive the indication in one of (A) a procedure for attaching the user equipment to the wireless communication network, and (B) a procedure for configuring packet data network connectivity between the user equipment and the wireless communication network. 35. (canceled) 36. The node according to claim 26, wherein the node is one of (A) an application server which is responsible for providing at least one of the volume charged packet data services and (B) a control server which is responsible for authorizing at least one of the volume charged packet data services. 37. (canceled) 38. 
The node according to claim 36, wherein the at least one processor is configured to prevent the transmission of the packet data traffic associated with the at least one volume charged packet data service by at least one of (A) inactivating a media component of the at least one volume charged packet data service and (B) rejecting a request associated with the at least one volume charged packet data service. 39. (canceled) 40. The node according to claim 36, wherein the at least one processor is configured to receive the indication in a procedure for registering the user equipment for the at least one volume charged packet data service. 41-42. (canceled) 43. A user equipment, comprising:
an interface for connecting to a wireless communication network; and at least one processor, the at least one processor being configured to:
detect a command to switch off packet data connectivity of the user equipment, and
in response to the command, send to the wireless communication network an indication that a limitation of volume charged packet data services is required for the user equipment. 44. The user equipment according to claim 43, wherein the indication indicates one of (A) that none of the volume charged packet data services are allowed for the user equipment and (B) that none of the volume charged packet data services are allowed while the user equipment is roaming, and wherein the indication further indicates one or more access networks in which the indication is applicable. 45-46. (canceled) 47. The user equipment according to claim 43, wherein the at least one processor is configured to send the indication in a procedure for at least one of (A) attaching the user equipment to the wireless communication network, (B) configuring packet data network connectivity between the user equipment and the wireless communication network, and (C) registering the user equipment for at least one of the volume charged packet data services. 48-49. (canceled) 50. The user equipment according to claim 43, wherein the at least one processor is configured to:
in response to sending the indication, receive an indication of a packet filter, and install the packet filter, the packet filter blocking the packet data traffic associated with the volume charged packet data services and passing the packet data traffic associated with the one or more other packet data services. 51-56. (canceled) | A node of a wireless communication network ( 150; 160; 170; 210; 220; 230; 240 ) receives an indication from a user equipment ( 10 ). The indication indicates that a limitation of volume charged packet data services is required for the user equipment ( 10 ). Depending on the received indication, the node ( 150; 160; 170; 210; 220; 230; 240 ) prevents transmission of packet data associated with the volume charged packet data services and allows transmission of packet data associated with one or more other packet data services.1. A method of controlling packet data connectivity in a wireless communication network, the method comprising:
a node of the wireless communication network receiving, from a user equipment, an indication that a limitation of volume charged packet data services is required for the user equipment; and depending on the received indication, the node preventing transmission of packet data traffic associated with the volume charged packet data services and allowing transmission of packet data traffic associated with one or more other packet data services. 2. The method according to claim 1, wherein the indication indicates one of (A) that none of the volume charged packet data services are allowed for the user equipment and (B) that none of the volume charged packet data services are allowed while the user equipment is roaming, and wherein the indication further indicates one or more access networks in which the indication is applicable. 3-4. (canceled) 5. The method of claim 1, further comprising:
the node configuring at least one packet filter which blocks the packet data traffic associated with the volume charged packet data services and passes the packet data traffic associated with the one or more other packet data services, wherein the at least one packet filter operates in the user equipment. 6-8. (canceled) 9. The method according to claim 5, wherein the node receives the indication in one of (A) a procedure for attaching the user equipment to the wireless communication network and (B) a procedure for configuring packet data network connectivity between the user equipment and the wireless communication network. 10-12. (canceled) 13. The method according to claim 1, wherein the node prevents the transmission of the packet data traffic associated with the at least one volume charged packet data service by at least one of (A) inactivating a media component of the at least one of the volume charged packet data service and (B) rejecting a request associated with the at least one volume charged packet data service. 14. (canceled) 15. The method according to claim 13, wherein the node receives the indication in a procedure for registering the user equipment for the at least one volume charged packet data service. 16. (canceled) 17. A method of controlling packet data connectivity in a wireless communication network, the method comprising:
a user equipment detecting a command to switch off packet data connectivity of the user equipment, and in response to the command, the user equipment sending, to the wireless communication network, an indication that a limitation of volume charged packet data services is required for the user equipment. 18. The method according to claim 17, wherein the indication indicates one of (A) that none of the volume charged packet data services are allowed for the user equipment and (B) that none of the volume charged packet data services are allowed while the user equipment is roaming, and wherein the indication further indicates one or more access networks in which the indication is applicable. 19-20. (canceled) 21. The method according to claim 17, wherein the user equipment sends the indication in at least one of (A) a procedure for attaching the user equipment to the wireless communication network, (B) a procedure for configuring packet data network connectivity between the user equipment and the wireless communication network, and (C) a procedure for registering the user equipment for at least one of the volume charged packet data services. 22-23. (canceled) 24. The method according to claim 17, further comprising:
in response to sending the indication, the user equipment receiving an indication of a packet filter; and the user equipment installing the packet filter, the packet filter blocking the packet data traffic associated with the volume charged packet data services and passing the packet data traffic associated with the one or more other packet data services. 25. (canceled) 26. A node for a cellular network, the node comprising:
at least one interface; and at least one processor, the at least one processor being configured to:
receive, from a user equipment, an indication that a limitation of volume charged packet data services is required for the user equipment; and
depending on the received indication, prevent transmission of packet data associated with the volume charged packet data services and allow transmission of packet data associated with one or more other packet data services. 27. The node according to claim 26, wherein the indication indicates one of (A) that none of the volume charged packet data services are allowed for the user equipment and (B) that none of the volume charged packet data services are allowed while the user equipment is roaming, and further indicates one or more access networks in which the indication is applicable. 28-29. (canceled) 30. The node according to claim 26, wherein the at least one processor is configured to configure at least one packet filter which blocks the packet data traffic associated with the volume charged packet data services and passes the packet data traffic associated with the one or more other packet data services, and wherein the packet filter operates in the user equipment. 31. The node according to claim 30, wherein the packet filter operates in at least one of (A) the node and (B) a further node of the wireless communication network. 32-33. (canceled) 34. The node according to claim 30, wherein the at least one processor is configured to receive the indication in one of (A) a procedure for attaching the user equipment to the wireless communication network, and (B) a procedure for configuring packet data network connectivity between the user equipment and the wireless communication network. 35. (canceled) 36. The node according to claim 26, wherein the node is one of (A) an application server which is responsible for providing at least one of the volume charged packet data services and (B) a control server which is responsible for authorizing at least one of the volume charged packet data services. 37. (canceled) 38. 
The node according to claim 36, wherein the at least one processor is configured to prevent the transmission of the packet data traffic associated with the at least one volume charged packet data service by at least one of (A) inactivating a media component of the at least one of the volume charged packet data service and (B) rejecting a request associated with the at least one volume charged packet data service. 39. (canceled) 40. The node according to claim 36, wherein the at least one processor is configured to receive the indication in a procedure for registering the user equipment for the at least one volume charged packet data service. 41-42. (canceled) 43. A user equipment, comprising:
an interface for connecting to a wireless communication network; and at least one processor, the at least one processor being configured to:
detect a command to switch off packet data connectivity of the user equipment, and
in response to the command, send to the wireless communication network an indication that a limitation of volume charged packet data services is required for the user equipment. 44. The user equipment according to claim 43, wherein the indication indicates one of (A) that none of the volume charged packet data services are allowed for the user equipment and (B) that none of the volume charged packet data services are allowed while the user equipment is roaming, and wherein the indication further indicates one or more access networks in which the indication is applicable. 45-46. (canceled) 47. The user equipment according to claim 43, wherein the at least one processor is configured to send the indication in a procedure for at least one of (A) attaching the user equipment to the wireless communication network, (B) configuring packet data network connectivity between the user equipment and the wireless communication network, and (C) registering the user equipment for at least one of the volume charged packet data services. 48-49. (canceled) 50. The user equipment according to claim 43, wherein the at least one processor is configured to:
in response to sending the indication, receive an indication of a packet filter, and install the packet filter, the packet filter blocking the packet data traffic associated with the volume charged packet data services and passing the packet data traffic associated with the one or more other packet data services. 51-56. (canceled) | 2,600 |
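Claims 5, 30 and 50 above all turn on a packet filter that blocks traffic associated with volume charged packet data services while passing traffic for other services. A minimal sketch of that filtering rule, assuming a hypothetical port-based classification of packets into services (the claims do not specify how traffic is classified, so the `Packet` type and the port mapping are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dst_port: int
    payload: bytes

# Hypothetical mapping: ports assumed to carry volume-charged services
# (e.g. video streaming endpoints). Not part of the claimed method.
VOLUME_CHARGED_PORTS = {8080, 1935}

def packet_filter(packet: Packet) -> bool:
    """Return True if the packet may be transmitted, False if it is blocked.

    Blocks traffic for volume-charged services, passes everything else,
    mirroring the block/pass behaviour recited in the claims.
    """
    return packet.dst_port not in VOLUME_CHARGED_PORTS

# Only the non-volume-charged packet survives the filter.
allowed = [p for p in (Packet(443, b"sig"), Packet(8080, b"video"))
           if packet_filter(p)]
```

The same rule could equally run in the user equipment, the node, or a further network node, as claims 30 and 31 contemplate; only the classification input would differ.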
9,715 | 9,715 | 15,408,669 | 2,657 | A voice keyword can be used to login/switch into a personalized experience on an electronic device. Both the speech of the keyword may be recognized and the audible fingerprint of the keyword can be recognized to match the fingerprint to a specific person. | 1. A device comprising:
at least one computer memory that is not a transitory signal and that comprises instructions executable by at least one processor to: receive digitized voice input; execute speech recognition on the digitized voice input to render a speech result indicating at least one word; execute speaker recognition on the digitized voice input to render a speaker result indicating at least one person; responsive to the speech result satisfying a first criteria and the speaker result satisfying a second criteria, establish on a display device at least one setting associated with the at least one person; responsive to the speech result satisfying the first criteria and the speaker result not satisfying the second criteria, change no setting on the display device; responsive to the speech result not satisfying the first criteria and the speaker result satisfying the second criteria, change no setting on the display device; and wherein the first criteria includes a match with a correct passcode, and the instructions are executable to, responsive to receiving a correct passcode from a correct associated speaker at least “N” times within a time period, stop changing settings on the display device for a timeout period regardless of whether a correct passcode is received from a correct associated speaker, N being an integer greater than one. 2. The device of claim 1, wherein the at least one setting includes an icon arrangement on a home page. 3. The device of claim 1, wherein the at least one setting includes a ratings limitation. 4. The device of claim 1, wherein the instructions are executable to:
responsive to the speech result satisfying a first criteria and the speaker result satisfying a second criteria, present on the display device a welcome message. 5. (canceled) 6. The device of claim 1, wherein the second criteria includes a match with a stored speaker template. 7, 8. (canceled) 9. A computer-implemented method, comprising:
executing speech recognition on digitized voice input to render a speech result indicating at least one word; executing speaker recognition on the digitized voice input to render a speaker result indicating at least one person; responsive to the speech result satisfying a first criteria and the speaker result satisfying a second criteria, establishing on the computing device at least one setting associated with the at least one person and otherwise not changing the at least one setting; and responsive to receiving a correct passcode from a correct associated speaker at least “N” times within a time period, ceasing to change settings on the computing device for a timeout period. 10. The method of claim 9, comprising:
responsive to the speech result satisfying the first criteria and the speaker result not satisfying the second criteria, changing no setting on the computing device; and responsive to the speech result not satisfying the first criteria and the speaker result satisfying the second criteria, changing no setting on the computing device. 11. The method of claim 9, comprising:
responsive to the speech result satisfying a first criteria and the speaker result satisfying a second criteria, presenting on the computing device a welcome message. 12. (canceled) 13. An apparatus, comprising:
at least one processor; at least one display controllable by the at least one processor; and at least one storage comprising instructions executable by the at least one processor for: receiving first digitized voice input from first and second people at a first time of day; executing speech recognition on the first digitized voice input to render a first multi-person speech result; responsive to the first multi-person speech result satisfying a first criteria and responsive to the first time of day, establishing on the display a first multi-person personalization for the first and second people; receiving second digitized voice input from the first and second people at a second time of day; executing speech recognition on the second digitized voice input to render a second multi-person speech result; responsive to the second multi-person speech result satisfying the first criteria and responsive to the second time of day, establishing on the display a second multi-person personalization for the first and second people, the second multi-person personalization being different from the first multi-person personalization, wherein at least one feature of at least the first multi-person personalization includes an icon arrangement on a home page. 14. (canceled) 15. The apparatus of claim 13, wherein at least one feature of at least the first multi-person personalization includes a ratings limitation. 16. The apparatus of claim 13, wherein the instructions are executable for:
responsive to the first speech result satisfying a first criteria and a speaker result satisfying a second criteria, presenting on the display a welcome message. 17. The apparatus of claim 13, wherein the first criteria includes a match with a correct passcode. 18. The apparatus of claim 16, wherein the second criteria includes a match with a stored speaker template. 19. The apparatus of claim 17, wherein the instructions are executable to:
responsive to receiving a correct passcode from a correct associated speaker at least “N” times within a time period, stop changing settings on the display for a timeout period. | A voice keyword can be used to login/switch into a personalized experience on an electronic device. Both the speech of the keyword may be recognized and the audible fingerprint of the keyword can be recognized to match the fingerprint to a specific person.1. A device comprising:
at least one computer memory that is not a transitory signal and that comprises instructions executable by at least one processor to: receive digitized voice input; execute speech recognition on the digitized voice input to render a speech result indicating at least one word; execute speaker recognition on the digitized voice input to render a speaker result indicating at least one person; responsive to the speech result satisfying a first criteria and the speaker result satisfying a second criteria, establish on a display device at least one setting associated with the at least one person; responsive to the speech result satisfying the first criteria and the speaker result not satisfying the second criteria, change no setting on the display device; responsive to the speech result not satisfying the first criteria and the speaker result satisfying the second criteria, change no setting on the display device; and wherein the first criteria includes a match with a correct passcode, and the instructions are executable to, responsive to receiving a correct passcode from a correct associated speaker at least “N” times within a time period, stop changing settings on the display device for a timeout period regardless of whether a correct passcode is received from a correct associated speaker, N being an integer greater than one. 2. The device of claim 1, wherein the at least one setting includes an icon arrangement on a home page. 3. The device of claim 1, wherein the at least one setting includes a ratings limitation. 4. The device of claim 1, wherein the instructions are executable to:
responsive to the speech result satisfying a first criteria and the speaker result satisfying a second criteria, present on the display device a welcome message. 5. (canceled) 6. The device of claim 1, wherein the second criteria includes a match with a stored speaker template. 7, 8. (canceled) 9. A computer-implemented method, comprising:
executing speech recognition on digitized voice input to render a speech result indicating at least one word; executing speaker recognition on the digitized voice input to render a speaker result indicating at least one person; responsive to the speech result satisfying a first criteria and the speaker result satisfying a second criteria, establishing on the computing device at least one setting associated with the at least one person and otherwise not changing the at least one setting; and responsive to receiving a correct passcode from a correct associated speaker at least “N” times within a time period, ceasing to change settings on the computing device for a timeout period. 10. The method of claim 9, comprising:
responsive to the speech result satisfying the first criteria and the speaker result not satisfying the second criteria, changing no setting on the computing device; and responsive to the speech result not satisfying the first criteria and the speaker result satisfying the second criteria, changing no setting on the computing device. 11. The method of claim 9, comprising:
responsive to the speech result satisfying a first criteria and the speaker result satisfying a second criteria, presenting on the computing device a welcome message. 12. (canceled) 13. An apparatus, comprising:
at least one processor; at least one display controllable by the at least one processor; and at least one storage comprising instructions executable by the at least one processor for: receiving first digitized voice input from first and second people at a first time of day; executing speech recognition on the first digitized voice input to render a first multi-person speech result; responsive to the first multi-person speech result satisfying a first criteria and responsive to the first time of day, establishing on the display a first multi-person personalization for the first and second people; receiving second digitized voice input from the first and second people at a second time of day; executing speech recognition on the second digitized voice input to render a second multi-person speech result; responsive to the second multi-person speech result satisfying the first criteria and responsive to the second time of day, establishing on the display a second multi-person personalization for the first and second people, the second multi-person personalization being different from the first multi-person personalization, wherein at least one feature of at least the first multi-person personalization includes an icon arrangement on a home page. 14. (canceled) 15. The apparatus of claim 13, wherein at least one feature of at least the first multi-person personalization includes a ratings limitation. 16. The apparatus of claim 13, wherein the instructions are executable for:
responsive to the first speech result satisfying a first criteria and a speaker result satisfying a second criteria, presenting on the display a welcome message. 17. The apparatus of claim 13, wherein the first criteria includes a match with a correct passcode. 18. The apparatus of claim 16, wherein the second criteria includes a match with a stored speaker template. 19. The apparatus of claim 17, wherein the instructions are executable to:
responsive to receiving a correct passcode from a correct associated speaker at least “N” times within a time period, stop changing settings on the display for a timeout period. | 2,600 |
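The gating logic of claim 1 above — change settings only when both the speech result (a correct passcode) and the speaker result (a matching speaker) are satisfied, and stop changing settings for a timeout period after “N” correct logins inside a time window — can be sketched as follows. The class name, the string-comparison stand-ins for speech and speaker recognition, and the default parameters are illustrative assumptions, not the claimed implementation:

```python
import time

class VoiceLogin:
    """Dual-gate settings control with an N-successes lockout."""

    def __init__(self, passcode, speaker_id, n=3, window=60.0, lockout=300.0):
        self.passcode, self.speaker_id = passcode, speaker_id
        self.n, self.window, self.lockout = n, window, lockout
        self.successes = []        # timestamps of correct logins
        self.locked_until = 0.0

    def try_login(self, word, speaker, now=None):
        """Return True if the per-person settings should be applied."""
        now = time.monotonic() if now is None else now
        if now < self.locked_until:
            return False           # timeout period: ignore even valid input
        if word != self.passcode or speaker != self.speaker_id:
            return False           # either gate failing changes no setting
        # Keep only successes inside the sliding time window, then record this one.
        self.successes = [t for t in self.successes if now - t < self.window]
        self.successes.append(now)
        if len(self.successes) >= self.n:
            self.locked_until = now + self.lockout
        return True
```

Whether the Nth correct login itself still changes settings before the lockout takes effect is left open by the claim; this sketch applies it and locks out subsequent attempts.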
9,716 | 9,716 | 14,842,528 | 2,677 | There is provided a system for keyword recognition comprising a memory storing a keyword recognition application, a processor executing the keyword recognition application to receive a digitized speech from an analog-to-digital (A/D) converter, divide the digitized speech into a plurality of speech segments having a first speech segment, calculate a first probability of distribution of a first keyword in the first speech segment, determine that a first fraction of the first speech segment includes the first keyword, in response to comparing the first probability of distribution with a first threshold associated with the first keyword, calculate a second probability of distribution of a second keyword in the first speech segment, and determine that a second fraction of the first speech segment includes the second keyword, in response to comparing the second probability of distribution with a second threshold associated with the second keyword. | 1. A system for keyword recognition, the system comprising:
a microphone configured to receive an input speech; an analog-to-digital (A/D) converter configured to convert the input speech from an analog form to a digital form and generate a digitized speech; a memory storing a keyword recognition application; a hardware processor executing the keyword recognition application to:
receive the digitized speech from the A/D converter;
divide the digitized speech into a plurality of speech segments having a first speech segment;
calculate a first probability of distribution of a first keyword in the first speech segment;
determine that a first fraction of the first speech segment includes the first keyword, in response to comparing the first probability of distribution with a first threshold associated with the first keyword;
calculate a second probability of distribution of a second keyword in the first speech segment; and
determine that a second fraction of the first speech segment includes the second keyword, in response to comparing the second probability of distribution with a second threshold associated with the second keyword. 2. The system of claim 1, wherein the first keyword at least partially overlaps the second keyword in the first speech segment. 3. The system of claim 1, wherein at least one of the first threshold and the second threshold is calibrated for high precision detection of the first keyword. 4. The system of claim 1, wherein at least one of the first threshold and the second threshold is calibrated for high recall detection of the first keyword. 5. The system of claim 1, wherein the plurality of speech segments include sliding window segments. 6. The system of claim 1, wherein, after determining the first speech segment includes the first keyword, the hardware processor is further configured to execute a first action associated with the first keyword. 7. The system of claim 1, wherein, after determining the first speech segment includes the second keyword, the hardware processor is further configured to execute a second action associated with the second keyword. 8. The system of claim 1, wherein at least one of the first keyword and the second keyword is a command for a game. 9. The system of claim 1, wherein the input speech includes speech from a first user and speech from a second user. 10. The system of claim 9, wherein the first user speaks the first keyword and the second user speaks the second keyword. 11. A method of keyword recognition, for use with a system having a microphone, an analog-to-digital (A/D) converter, a memory including a keyword recognition application, and a hardware processor, the method comprising:
receiving, using the hardware processor, a digitized speech from the A/D converter; dividing, using the hardware processor, the digitized speech into a plurality of speech segments having a first speech segment; calculating, using the hardware processor, a first probability of distribution of a first keyword in the first speech segment; determining, using the hardware processor, that a first fraction of the first speech segment includes the first keyword, in response to comparing the first probability of distribution with a first threshold associated with the first keyword; calculating, using the hardware processor, a second probability of distribution of a second keyword in the first speech segment; and determining, using the hardware processor, that a second fraction of the first speech segment includes the second keyword, in response to comparing the second probability of distribution with a second threshold associated with the second keyword. 12. The method of claim 11, wherein the first keyword at least partially overlaps the second keyword in the first speech segment. 13. The method of claim 11, wherein the first threshold is calibrated for high precision detection of the first keyword. 14. The method of claim 11, wherein the first threshold is calibrated for high recall detection of the first keyword. 15. The method of claim 11, wherein the plurality of speech segments include sliding window segments. 16. The method of claim 11, further comprising:
executing, using the processor, a first action associated with the first keyword if the first keyword is recognized. 17. The method of claim 11, further comprising:
executing, using the processor, a second action associated with the second keyword if the second keyword is recognized. 18. The method of claim 11, wherein at least one of the first keyword and the second keyword is a command for a game. 19. The method of claim 11, wherein the input speech includes speech from a first user and speech from a second user, and wherein the first user speaks the first keyword and the second user speaks the second keyword. 20. A system for keyword recognition, the system comprising:
a microphone configured to receive an input speech; an analog-to-digital (A/D) converter configured to convert the input speech from an analog form to a digital form and generate a digitized speech; a memory storing a keyword recognition application; a hardware processor executing the keyword recognition application to:
receive the digitized speech from the A/D converter;
divide the digitized speech into a plurality of speech segments, including a first speech segment including a plurality of keywords and background, wherein background includes portions of the first speech segment that do not contain keywords;
convert the first speech segment to a feature vector sequence;
model a plurality of keyword probability distributions from the feature vector sequence, wherein each keyword probability distribution of the plurality of keyword probability distributions corresponds to a keyword of the plurality of keywords;
model a background probability distribution from the feature vector sequence;
model the first speech segment as a combination of a plurality of keyword vectors and a plurality of background vectors;
model a speech segment probability distribution as a mixture of the plurality of keyword probability distributions and the background probability distribution;
estimate a plurality of keyword mixture weights corresponding to the plurality of keyword probability distributions and a background mixture weight corresponding to the background probability distribution using any maximum-likelihood technique;
equate each keyword mixture weight of the plurality of keyword mixture weights to a corresponding plurality of probabilities of each keyword of the plurality of keywords and to a corresponding plurality of fractions of the first speech segment that contain each keyword of the plurality of keywords. | There is provided a system for keyword recognition comprising a memory storing a keyword recognition application, a processor executing the keyword recognition application to receive a digitized speech from an analog-to-digital (A/D) converter, divide the digitized speech into a plurality of speech segments having a first speech segment, calculate a first probability of distribution of a first keyword in the first speech segment, determine that a first fraction of the first speech segment includes the first keyword, in response to comparing the first probability of distribution with a first threshold associated with the first keyword, calculate a second probability of distribution of a second keyword in the first speech segment, and determine that a second fraction of the first speech segment includes the second keyword, in response to comparing the second probability of distribution with a second threshold associated with the second keyword.1. A system for keyword recognition, the system comprising:
a microphone configured to receive an input speech; an analog-to-digital (A/D) converter configured to convert the input speech from an analog form to a digital form and generate a digitized speech; a memory storing a keyword recognition application; a hardware processor executing the keyword recognition application to:
receive the digitized speech from the A/D converter;
divide the digitized speech into a plurality of speech segments having a first speech segment;
calculate a first probability of distribution of a first keyword in the first speech segment;
determine that a first fraction of the first speech segment includes the first keyword, in response to comparing the first probability of distribution with a first threshold associated with the first keyword;
calculate a second probability of distribution of a second keyword in the first speech segment; and
determine that a second fraction of the first speech segment includes the second keyword, in response to comparing the second probability of distribution with a second threshold associated with the second keyword. 2. The system of claim 1, wherein the first keyword at least partially overlaps the second keyword in the first speech segment. 3. The system of claim 1, wherein at least one of the first threshold and the second threshold is calibrated for high precision detection of the first keyword. 4. The system of claim 1, wherein at least one of the first threshold and the second threshold is calibrated for high recall detection of the first keyword. 5. The system of claim 1, wherein the plurality of speech segments include sliding window segments. 6. The system of claim 1, wherein, after determining the first speech segment includes the first keyword, the hardware processor is further configured to execute a first action associated with the first keyword. 7. The system of claim 1, wherein, after determining the first speech segment includes the second keyword, the hardware processor is further configured to execute a second action associated with the second keyword. 8. The system of claim 1, wherein at least one of the first keyword and the second keyword is a command for a game. 9. The system of claim 1, wherein the input speech includes speech from a first user and speech from a second user. 10. The system of claim 9, wherein the first user speaks the first keyword and the second user speaks the second keyword. 11. A method of keyword recognition, for use with a system having a microphone, an analog-to-digital (A/D) converter, a memory including a keyword recognition application, and a hardware processor, the method comprising:
receiving, using the hardware processor, a digitized speech from the A/D converter; dividing, using the hardware processor, the digitized speech into a plurality of speech segments having a first speech segment; calculating, using the hardware processor, a first probability of distribution of a first keyword in the first speech segment; determining, using the hardware processor, that a first fraction of the first speech segment includes the first keyword, in response to comparing the first probability of distribution with a first threshold associated with the first keyword; calculating, using the hardware processor, a second probability of distribution of a second keyword in the first speech segment; and determining, using the hardware processor, that a second fraction of the first speech segment includes the second keyword, in response to comparing the second probability of distribution with a second threshold associated with the second keyword. 12. The method of claim 11, wherein the first keyword at least partially overlaps the second keyword in the first speech segment. 13. The method of claim 11, wherein the first threshold is calibrated for high precision detection of the first keyword. 14. The method of claim 11, wherein the first threshold is calibrated for high recall detection of the first keyword. 15. The method of claim 11, wherein the plurality of speech segments include sliding window segments. 16. The method of claim 11, further comprising:
executing, using the processor, a first action associated with the first keyword if the first keyword is recognized. 17. The method of claim 11, further comprising:
executing, using the processor, a second action associated with the second keyword if the second keyword is recognized. 18. The method of claim 11, wherein the at least one of the first keyword and the second keyword is a command for a game. 19. The method of claim 11, wherein the input speech includes speech from a first user and speech from a second user, and wherein the first user speaks the first keyword and the second user speaks the second keyword. 20. A system for keyword recognition, the system comprising:
a microphone configured to receive an input speech; an analog-to-digital (A/D) converter configured to convert the input speech from an analog form to a digital form and generate a digitized speech; a memory storing a keyword recognition application; a hardware processor executing the keyword recognition application to:
receive the digitized speech from the A/D converter;
divide the digitized speech into a plurality of speech segments, including a first speech segment including a plurality of keywords and background, wherein background includes portions of the first speech segment that do not contain keywords;
convert the first speech segment to a feature vector sequence;
model a plurality of keyword probability distributions from the feature vector sequence, wherein each keyword probability distribution of the plurality of keyword probability distributions corresponds to a keyword of the plurality of keywords;
model a background probability distribution from the feature vector sequence;
model the first speech segment as a combination of a plurality of keyword vectors and a plurality of background vectors;
model a speech segment probability distribution as a mixture of the plurality of keyword probability distributions and the background probability distribution;
estimate a plurality of keyword mixture weights corresponding to the plurality of keyword probability distributions and a background mixture weight corresponding to the background probability distribution using any maximum-likelihood technique;
equate each keyword mixture weight of the plurality of keyword mixture weights to a corresponding plurality of probabilities of each keyword of the plurality of keywords and to a corresponding plurality of fractions of the first speech segment that contain each keyword of the plurality of keywords. | 2,600 |
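The modeling steps in claim 20 above (model the segment as a mixture of keyword and background probability distributions, estimate mixture weights by maximum likelihood, and equate each keyword weight to the fraction of the segment containing that keyword) admit a compact sketch. The following is an illustrative EM loop over fixed, known component densities; the function name, the unnormalized Gaussian likelihoods, and the choice of EM are assumptions for demonstration, not the patent's implementation.

```python
import numpy as np

def estimate_mixture_weights(frames, component_pdfs, n_iter=50):
    """Estimate mixture weights for fixed component densities via EM.

    frames: (T, D) feature vector sequence for one speech segment.
    component_pdfs: list of callables, each mapping (T, D) -> (T,) likelihoods.
        Convention here: components [0..K-2] are keywords, the last is background.
    Returns a weight vector summing to 1; each keyword weight approximates the
    fraction of frames drawn from that keyword's distribution.
    """
    liks = np.stack([pdf(frames) for pdf in component_pdfs], axis=1)  # (T, K)
    K = liks.shape[1]
    w = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each frame
        resp = w * liks
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weights are the average responsibilities across frames
        w = resp.mean(axis=0)
    return w
```

On synthetic data with two well-separated 1-D Gaussian components, the recovered weights track the true frame fractions, matching the claim's equating of mixture weights to segment fractions.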
9,717 | 9,717 | 13,932,190 | 2,657 | A method for providing cross-language automatic speech recognition is provided. The method includes choosing a preferred first language for a speech recognition system. The speech recognition system supports multiple languages. A search operation is initiated using the speech recognition system. A user is prompted to continue the search operation in the first language or a second language. In response to the user selection of continuing in the second language, searching is provided in the second language and interaction is provided with the user in the first language during the search operation. | 1. A method for providing cross-language automatic speech recognition, the method comprising:
choosing a preferred first language for a speech recognition system, the speech recognition system supporting multiple languages; initiating a search operation using the speech recognition system; prompting a user to continue the search operation in the first language or a second language; and in response to the user selection of continuing in the second language, providing searching in the second language and providing interaction with the user in the first language during the search operation. 2. The method of claim 1, wherein the first language comprises French and the second language comprises English. 3. The method of claim 1 further comprising, in response to the user selection of continuing in the first language, providing searching and speech interaction with the user in the first language. 4. The method of claim 1 further comprising displaying search results in the second language. 5. The method of claim 1 further comprising searching for an address using the speech recognition system. 6. The method of claim 5, wherein the address is in Quebec, Canada. 7. The method of claim 1, wherein the speech recognition system is in a vehicle. 8. The method of claim 1 further comprising using phonetic data to recognize speech in the first and second languages. 9. An automatic speech recognition system that provides cross-language automatic speech recognition, the automatic speech recognition system comprising:
a computing device comprising one or more processors and one or more memory components, the computing device including speech and language logic that
in response to a user initiating a search operation, prompts the user to continue the search operation in a first language or a second language; and
in response to the user selection of continuing in the second language, provides searching in the second language and provides interaction with the user in the first language during the search operation. 10. The system of claim 9, wherein the first language comprises French and the second language comprises English. 11. The system of claim 9, wherein the speech and language logic, in response to the user selection of continuing in the first language, provides searching and speech interaction with the user in the first language. 12. The system of claim 9 further comprising a display, the computing device displaying search results on the display in the second language. 13. The system of claim 9, wherein the speech and language logic uses phonetic data to recognize speech in the first and second languages. 14. A method for providing cross-language automatic speech recognition, the method comprising:
initiating an address search operation using a speech recognition system, the speech recognition system having a preferred first language and supporting at least one other language; prompting a user to continue the address search operation in the first language or the at least one other language after the address search is initiated; and in response to the user selection of continuing in the at least one other language, providing searching in the at least one other language and providing interaction with the user in the first language. 15. The method of claim 14 further comprising searching in a language-specific inventory. 16. The method of claim 14, wherein the first language comprises French and the at least one other language comprises English. 17. The method of claim 14 further comprising, in response to the user selection of continuing in the first language, providing searching and speech interaction with the user in the first language. 18. The method of claim 14 further comprising the speech recognition system determining if a geographic region input by the user supports at least one non-traditional address format. 19. The method of claim 14, wherein the speech recognition system is in a vehicle. 20. The method of claim 14 further comprising using phonetic data to recognize speech in the first and at least one other language. | A method for providing cross-language automatic speech recognition is provided. The method includes choosing a preferred first language for a speech recognition system. The speech recognition system supports multiple languages. A search operation is initiated using the speech recognition system. A user is prompted to continue the search operation in the first language or a second language. In response to the user selection of continuing in the second language, searching is provided in the second language and interaction is provided with the user in the first language during the search operation.1. 
A method for providing cross-language automatic speech recognition, the method comprising:
choosing a preferred first language for a speech recognition system, the speech recognition system supporting multiple languages; initiating a search operation using the speech recognition system; prompting a user to continue the search operation in the first language or a second language; and in response to the user selection of continuing in the second language, providing searching in the second language and providing interaction with the user in the first language during the search operation. 2. The method of claim 1, wherein the first language comprises French and the second language comprises English. 3. The method of claim 1 further comprising, in response to the user selection of continuing in the first language, providing searching and speech interaction with the user in the first language. 4. The method of claim 1 further comprising displaying search results in the second language. 5. The method of claim 1 further comprising searching for an address using the speech recognition system. 6. The method of claim 5, wherein the address is in Quebec, Canada. 7. The method of claim 1, wherein the speech recognition system is in a vehicle. 8. The method of claim 1 further comprising using phonetic data to recognize speech in the first and second languages. 9. An automatic speech recognition system that provides cross-language automatic speech recognition, the automatic speech recognition system comprising:
a computing device comprising one or more processors and one or more memory components, the computing device including speech and language logic that
in response to a user initiating a search operation, prompts the user to continue the search operation in a first language or a second language; and
in response to the user selection of continuing in the second language, provides searching in the second language and provides interaction with the user in the first language during the search operation. 10. The system of claim 9, wherein the first language comprises French and the second language comprises English. 11. The system of claim 9, wherein the speech and language logic, in response to the user selection of continuing in the first language, provides searching and speech interaction with the user in the first language. 12. The system of claim 9 further comprising a display, the computing device displaying search results on the display in the second language. 13. The system of claim 9, wherein the speech and language logic uses phonetic data to recognize speech in the first and second languages. 14. A method for providing cross-language automatic speech recognition, the method comprising:
initiating an address search operation using a speech recognition system, the speech recognition system having a preferred first language and supporting at least one other language; prompting a user to continue the address search operation in the first language or the at least one other language after the address search is initiated; and in response to the user selection of continuing in the at least one other language, providing searching in the at least one other language and providing interaction with the user in the first language. 15. The method of claim 14 further comprising searching in a language-specific inventory. 16. The method of claim 14, wherein the first language comprises French and the at least one other language comprises English. 17. The method of claim 14 further comprising, in response to the user selection of continuing in the first language, providing searching and speech interaction with the user in the first language. 18. The method of claim 14 further comprising the speech recognition system determining if a geographic region input by the user supports at least one non-traditional address format. 19. The method of claim 14, wherein the speech recognition system is in a vehicle. 20. The method of claim 14 further comprising using phonetic data to recognize speech in the first and at least one other language. | 2,600 |
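The claim 14 flow above (run the search in the user-selected language while all interaction stays in the preferred first language) can be sketched as a small routing function. Every name here (`run_search`, the backend and prompt dictionaries) is a hypothetical placeholder, not an API from the patent.

```python
def run_search(query, preferred_lang, search_lang, search_backends, prompts):
    """Sketch of the cross-language flow: search in one language,
    interact in another.

    search_backends: dict mapping language code -> search function.
    prompts: dict mapping (message_key, language) -> localized prompt text.
    Interaction strings always come from preferred_lang; the search itself
    runs in whichever language the user selected.
    """
    interaction = [prompts[("continue_in", preferred_lang)]]
    results = search_backends[search_lang](query)
    interaction.append(prompts[("results_ready", preferred_lang)])
    return results, interaction
```

For example, a French-preferring user searching an English address inventory gets English results but French prompts throughout.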
9,718 | 9,718 | 13,965,861 | 2,625 | An embodiment provides a method, including: ascertaining user input to a display screen forming a predetermined shape associated with system-wide note taking; determining, using one or more processors, user input note data associated with the predetermined shape; and providing, in a predetermined location, a note including the user input note data. Other aspects are described and claimed. | 1. A method, comprising:
ascertaining user input to a display screen forming a predetermined shape associated with system-wide note taking; determining, using one or more processors, user input note data associated with the predetermined shape; and providing, in a predetermined location, a note including the user input note data. 2. The method of claim 1, wherein the determining comprises determining user input note data bounded by the predetermined shape. 3. The method of claim 2, wherein the user input note data bounded by the predetermined shape comprises note data input by a user after the ascertaining the predetermined shape. 4. The method of claim 1, wherein the user input note data remains in an application rendered in the display screen after the user input note data is provided in the predetermined location. 5. The method of claim 1, further comprising determining additional user input note data associated with the predetermined shape; and updating the note based on the additional user input note data. 6. The method of claim 1, wherein the user input note data is removed from an application rendered in the display screen. 7. The method of claim 6, wherein the user input note data is removed from the application rendered in the display screen by a fading animation. 8. The method of claim 1, wherein the predetermined location is accessible from a desktop view of an information handling device. 9. The method of claim 8, wherein the user input note data provided in the predetermined location comprises user input note data bounded by the predetermined shape selected from the group consisting of a screen capture; handwritten user input, and machine readable text input. 10. The method of claim 1, wherein the predetermined shape is ascertainable within any application such that the predetermined shape is associated with system-wide note taking in an application-independent manner. 11. An information handling device, comprising:
a display screen; one or more processors; a memory storing instructions accessible to the one or more processors, the instructions being executable by the one or more processors to: ascertain user input to the display screen forming a predetermined shape associated with system-wide note taking; determine user input note data associated with the predetermined shape; and provide, in a predetermined location, a note including the user input note data. 12. The information handling device of claim 11, wherein to determine comprises determining user input note data bounded by the predetermined shape. 13. The information handling device of claim 12, wherein the user input note data bounded by the predetermined shape comprises note data input by a user after the ascertaining the predetermined shape. 14. The information handling device of claim 11, wherein the user input note data remains in an application rendered in the display screen after the user input note data is provided in the predetermined location. 15. The information handling device of claim 11, wherein the instructions are further executable by the one or more processors to determine additional user input note data associated with the predetermined shape; and update the note based on the additional user input note data. 16. The information handling device of claim 11, wherein the user input note data is removed from an application rendered in the display screen. 17. The information handling device of claim 16, wherein the user input note data is removed from the application rendered in the display screen by a fading animation. 18. The information handling device of claim 11, wherein the predetermined location is accessible from a desktop view of the information handling device. 19. The information handling device of claim 11, wherein the predetermined shape is ascertainable within any application such that the predetermined shape is associated with system-wide note taking in an application-independent manner. 20. 
A program product, comprising:
a storage medium having computer readable program code stored therewith, the computer readable program code comprising: computer readable program code configured to ascertain user input to a display screen forming a predetermined shape associated with system-wide note taking; computer readable program code configured to determine, using one or more processors, user input note data associated with the predetermined shape; and computer readable program code configured to provide, in a predetermined location, a note including the user input note data. | An embodiment provides a method, including: ascertaining user input to a display screen forming a predetermined shape associated with system-wide note taking; determining, using one or more processors, user input note data associated with the predetermined shape; and providing, in a predetermined location, a note including the user input note data. Other aspects are described and claimed.1. A method, comprising:
ascertaining user input to a display screen forming a predetermined shape associated with system-wide note taking; determining, using one or more processors, user input note data associated with the predetermined shape; and providing, in a predetermined location, a note including the user input note data. 2. The method of claim 1, wherein the determining comprises determining user input note data bounded by the predetermined shape. 3. The method of claim 2, wherein the user input note data bounded by the predetermined shape comprises note data input by a user after the ascertaining the predetermined shape. 4. The method of claim 1, wherein the user input note data remains in an application rendered in the display screen after the user input note data is provided in the predetermined location. 5. The method of claim 1, further comprising determining additional user input note data associated with the predetermined shape; and updating the note based on the additional user input note data. 6. The method of claim 1, wherein the user input note data is removed from an application rendered in the display screen. 7. The method of claim 6, wherein the user input note data is removed from the application rendered in the display screen by a fading animation. 8. The method of claim 1, wherein the predetermined location is accessible from a desktop view of an information handling device. 9. The method of claim 8, wherein the user input note data provided in the predetermined location comprises user input note data bounded by the predetermined shape selected from the group consisting of a screen capture; handwritten user input, and machine readable text input. 10. The method of claim 1, wherein the predetermined shape is ascertainable within any application such that the predetermined shape is associated with system-wide note taking in an application-independent manner. 11. An information handling device, comprising:
a display screen; one or more processors; a memory storing instructions accessible to the one or more processors, the instructions being executable by the one or more processors to: ascertain user input to the display screen forming a predetermined shape associated with system-wide note taking; determine user input note data associated with the predetermined shape; and provide, in a predetermined location, a note including the user input note data. 12. The information handling device of claim 11, wherein to determine comprises determining user input note data bounded by the predetermined shape. 13. The information handling device of claim 12, wherein the user input note data bounded by the predetermined shape comprises note data input by a user after the ascertaining the predetermined shape. 14. The information handling device of claim 11, wherein the user input note data remains in an application rendered in the display screen after the user input note data is provided in the predetermined location. 15. The information handling device of claim 11, wherein the instructions are further executable by the one or more processors to determine additional user input note data associated with the predetermined shape; and update the note based on the additional user input note data. 16. The information handling device of claim 11, wherein the user input note data is removed from an application rendered in the display screen. 17. The information handling device of claim 16, wherein the user input note data is removed from the application rendered in the display screen by a fading animation. 18. The information handling device of claim 11, wherein the predetermined location is accessible from a desktop view of the information handling device. 19. The information handling device of claim 11, wherein the predetermined shape is ascertainable within any application such that the predetermined shape is associated with system-wide note taking in an application-independent manner. 20. 
A program product, comprising:
a storage medium having computer readable program code stored therewith, the computer readable program code comprising: computer readable program code configured to ascertain user input to a display screen forming a predetermined shape associated with system-wide note taking; computer readable program code configured to determine, using one or more processors, user input note data associated with the predetermined shape; and computer readable program code configured to provide, in a predetermined location, a note including the user input note data. | 2,600 |
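The method of claim 1 above (a predetermined shape triggers capture of the note data it bounds, which is then provided as a note) could look roughly like the sketch below. The closed-loop heuristic standing in for "predetermined shape" and the bounding-box capture are illustrative assumptions only; the function and parameter names are hypothetical.

```python
def capture_note(stroke, ink_points, close_tol=10.0):
    """Hedged sketch: treat a roughly closed stroke as the predetermined
    shape and collect ink points inside its bounding box as the note data.

    stroke, ink_points: lists of (x, y) tuples. Returns the bounded points,
    or None if the stroke does not close on itself (not the note gesture).
    """
    (x0, y0), (x1, y1) = stroke[0], stroke[-1]
    # Require the stroke to end near where it began, i.e. a closed shape.
    if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > close_tol:
        return None
    xs = [p[0] for p in stroke]
    ys = [p[1] for p in stroke]
    min_x, max_x, min_y, max_y = min(xs), max(xs), min(ys), max(ys)
    return [(x, y) for (x, y) in ink_points
            if min_x <= x <= max_x and min_y <= y <= max_y]
```

A real implementation would run this check system-wide, independent of the foreground application, and forward the captured points to the predetermined note location.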
9,719 | 9,719 | 15,345,678 | 2,667 | Some embodiments are provided for providing a wireless bridge to local devices on personal equipment systems. Personal equipment systems can include wireless communication systems that allow external systems, users, or both to communicate with and access data from local devices on the personal equipment systems. Personal equipment systems are provided having one or more local devices coupled thereto and in wired communication with one another. Personal equipment systems can include a wireless system and local devices attached to a headgear system, the local devices being in wired communication with one another and in wireless communication with external systems. The wireless communication system is configured to establish a wireless connection with external systems for communicating with and accessing local devices that are in wired communication with each other. | 1. A personal equipment system arrangement comprising:
a long-range radio electrically coupled to a personal equipment system and configured to communicate voice and data over distances greater than 100 m; a low-power, short-range wireless communication system electrically coupled to the personal equipment system; and a higher-power, short-range ultra wideband wireless system electrically coupled to the personal equipment system, wherein the personal equipment system receives information wirelessly from an external system through the higher-power, short-range ultra-wideband wireless system for information transmitted at a rate greater than or equal to about 500 kbps and through the low-power, short-range wireless communication system for information transmitted at a rate less than 500 kbps. 2. The personal equipment system arrangement of claim 1, wherein the low-power, short-range wireless communication system consumes less than or equal to about 500 mW of power. 3. The personal equipment system arrangement of claim 1, wherein the low-power, short-range wireless communication system is configured to communicate with the external system only when the external system is located less than or equal to about 5 m from the personal equipment system. 4. The personal equipment system arrangement of claim 1, wherein the low-power, short-range wireless communication system receives sensor data from the external system. 5. 
The personal equipment system arrangement of claim 1, wherein the low-power, short-range wireless communication system includes an antenna configured to send and receive electromagnetic signals, a transceiver electrically coupled to the antenna, a primary processor electrically coupled to the transceiver, and a secondary processor electrically coupled to a cabled intrapersonal network, the primary and secondary processors being communicably coupled, the primary and secondary processors configured to encode information received by the antenna for delivery to one or more local devices coupled to the cabled intrapersonal network. 6. The personal equipment system arrangement of claim 5, wherein the cabled intrapersonal network comprises an optical digital signal link. 7. The personal equipment system arrangement of claim 1, wherein the low-power, short-range wireless communication system is configured to create the wireless bridge between the external system and the personal equipment system only when the external system is less than or equal to about 5 m from the personal equipment system. 8. The personal equipment system arrangement of claim 7, wherein the external system comprises one of a computer, phone, radio, laptop, or smartphone. 9. The personal equipment system arrangement of claim 7, wherein the external system communicates sensor data associated with the external system to the personal equipment system through the wireless bridge. 10. The personal equipment system arrangement of claim 9, wherein sensor data is at least one of an accelerometer data, MEMs data, image sensor data, and GPS data. 11. The personal equipment system arrangement of claim 1, further comprising:
a primary processor electrically coupled to a transceiver and to an encoding engine configured to encode signals for transmission to the external system and to decode signals received from the external system; and a secondary processor electrically coupled to the encoding engine, wherein the secondary processor is configured to transmit the signals from the personal equipment system to the encoding engine for encoding and to transmit signals from the encoding engine over a cabled intrapersonal communication network. 12. The personal equipment system arrangement of claim 11, further comprising:
a controller electrically coupled to the primary and secondary processors, wherein the controller is configured to control the operation of the wireless communication system. 13. The personal equipment system arrangement of claim 1, wherein the wireless communication system is configured to communicate wirelessly with an external system that is worn on a body of a user or carried by a user. | Some embodiments are provided for providing a wireless bridge to local devices on personal equipment systems. Personal equipment systems can include wireless communication systems that allow external systems, users, or both to communicate with and access data from local devices on the personal equipment systems. Personal equipment systems are provided having one or more local devices coupled thereto and in wired communication with one another. Personal equipment systems can include a wireless system and local devices attached to a headgear system, the local devices being in wired communication with one another and in wireless communication with external systems. The wireless communication system is configured to establish a wireless connection with external systems for communicating with and accessing local devices that are in wired communication with each other.1. A personal equipment system arrangement comprising:
a long-range radio electrically coupled to a personal equipment system and configured to communicate voice and data over distances greater than 100 m; a low-power, short-range wireless communication system electrically coupled to the personal equipment system; and a higher-power, short-range ultra wideband wireless system electrically coupled to the personal equipment system, wherein the personal equipment system receives information wirelessly from an external system through the higher-power, short-range ultra-wideband wireless system for information transmitted at a rate greater than or equal to about 500 kbps and through the low-power, short-range wireless communication system for information transmitted at a rate less than 500 kbps. 2. The personal equipment system arrangement of claim 1, wherein the low-power, short-range wireless communication system consumes less than or equal to about 500 mW of power. 3. The personal equipment system arrangement of claim 1, wherein the low-power, short-range wireless communication system is configured to communicate with the external system only when the external system is located less than or equal to about 5 m from the personal equipment system. 4. The personal equipment system arrangement of claim 1, wherein the low-power, short-range wireless communication system receives sensor data from the external system. 5. 
The personal equipment system arrangement of claim 1, wherein the low-power, short-range wireless communication system includes an antenna configured to send and receive electromagnetic signals, a transceiver electrically coupled to the antenna, a primary processor electrically coupled to the transceiver, and a secondary processor electrically coupled to a cabled intrapersonal network, the primary and secondary processors being communicably coupled, the primary and secondary processors configured to encode information received by the antenna for delivery to one or more local devices coupled to the cabled intrapersonal network. 6. The personal equipment system arrangement of claim 5, wherein the cabled intrapersonal network comprises an optical digital signal link. 7. The personal equipment system arrangement of claim 1, wherein the low-power, short-range wireless communication system is configured to create the wireless bridge between the external system and the personal equipment system only when the external system is less than or equal to about 5 m from the personal equipment system. 8. The personal equipment system arrangement of claim 7, wherein the external system comprises one of a computer, phone, radio, laptop, or smartphone. 9. The personal equipment system arrangement of claim 7, wherein the external system communicates sensor data associated with the external system to the personal equipment system through the wireless bridge. 10. The personal equipment system arrangement of claim 9, wherein sensor data is at least one of an accelerometer data, MEMs data, image sensor data, and GPS data. 11. The personal equipment system arrangement of claim 1, further comprising:
a primary processor electrically coupled to a transceiver and to an encoding engine configured to encode signals for transmission to the external system and to decode signals received from the external system; and a secondary processor electrically coupled to the encoding engine, wherein the secondary processor is configured to transmit the signals from the personal equipment system to the encoding engine for encoding and to transmit signals from the encoding engine over a cabled intrapersonal communication network. 12. The personal equipment system arrangement of claim 11, further comprising:
a controller electrically coupled to the primary and secondary processors, wherein the controller is configured to control the operation of the wireless communication system. 13. The personal equipment system arrangement of claim 1, wherein the wireless communication system is configured to communicate wirelessly with an external system that is worn on a body of a user or carried by a user. | 2,600 |
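The claim above routes traffic by data rate: transfers at or above about 500 kbps use the higher-power, short-range ultra-wideband link, while slower transfers use the low-power, short-range link. The sketch below is an illustrative model of that selection rule only; the threshold constant, function name, and radio labels are assumptions drawn from the claim text, not from any real implementation.

```python
# Illustrative sketch of the rate-based radio selection described in claim 1.
# The 500 kbps threshold comes from the claim language; all names here are
# hypothetical stand-ins, not part of any actual product API.

UWB_RATE_THRESHOLD_KBPS = 500

def select_radio(rate_kbps: float) -> str:
    """Pick the short-range radio for a transfer based on its data rate."""
    if rate_kbps >= UWB_RATE_THRESHOLD_KBPS:
        return "uwb"        # higher-power, short-range ultra-wideband system
    return "low_power"      # low-power, short-range link (e.g. sensor data)
```

For example, streaming image data at 2,000 kbps would select the ultra-wideband link, while 50 kbps of accelerometer data would stay on the low-power link.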
9,720 | 9,720 | 14,276,260 | 2,626 | An embodiment provides a method, including: communicating data from a smart pen to a device using a pen input data channel having a first bandwidth; said data including short range wireless connection data; establishing, using said data, a short range wireless connection between the smart pen and the device; and communicating, in a connected condition, higher bandwidth data between the smart pen and the device using the short range wireless connection. Other aspects are described and claimed. | 1. A method, comprising:
communicating data from a smart pen to a device using a pen input data channel having a first bandwidth; said data including short range wireless connection data; establishing, using said data, a short range wireless connection between the smart pen and the device; and communicating, in a connected condition, higher bandwidth data between the smart pen and the device using the short range wireless connection. 2. The method of claim 1, wherein the higher bandwidth data includes data selected from the group consisting of a media file, sensor data, and pen button data. 3. The method of claim 1, wherein said pen input channel is used actively by said smart pen to communicate pen input location data to an input component of the device. 4. The method of claim 3, wherein said input component is selected from the group consisting of a touch screen, a digitizer, and a combination thereof. 5. The method of claim 1, further comprising communicating data from the smart pen to a second device using the pen input data channel, said data including short range wireless connecting data. 6. The method of claim 5, further comprising:
establishing a short range wireless connection between the smart pen and the second device. 7. The method of claim 6, further comprising automatically switching a connection between the smart pen and the device and the second device based on a factor selected from the group consisting of proximity and pen input data channel communication. 8. The method of claim 1, wherein the communicating data from a smart pen to a device using a pen input data channel comprises communicating short range wireless first time pairing data. 9. The method of claim 1, wherein communicating, in a connected condition, higher bandwidth data between the smart pen and the device using the short range wireless connection comprises communicating the higher bandwidth data from the smart pen to the device. 10. The method of claim 1, wherein communicating, in a connected condition, higher bandwidth data between the smart pen and the device using the short range wireless connection comprises communicating the higher bandwidth data to the smart pen from the device. 11. A device, comprising:
an input component accepting smart pen input; a display; a processor operatively coupled to the input component and the display; and a memory storing instructions that are executable by the processor to: receive data from a smart pen at the device using a pen input data channel having a first bandwidth; said data including short range wireless connection data; establish, using said data, a short range wireless connection with the smart pen; and communicate, in a connected condition with the smart pen, higher bandwidth data using the short range wireless connection. 12. The device of claim 11, wherein the higher bandwidth data includes data selected from the group consisting of a media file, sensor data, and pen button data. 13. The device of claim 11, wherein said pen input data channel is used actively by said smart pen to communicate pen input location data to the input component of the device. 14. The device of claim 11, wherein said input component is selected from the group consisting of a touch screen, a digitizer and a combination thereof. 15. The device of claim 11, wherein the instructions are further executable by the processor to receive data from another smart pen at the device using a pen input data channel, said data including short range wireless connecting data. 16. The device of claim 15, wherein the instructions are further executable by the processor to:
establish a short range wireless connection between the another smart pen and the device. 17. The device of claim 16, wherein the instructions are further executable by the processor to automatically switch a connection between a smart pen and the device based on a factor selected from the group consisting of proximity and pen input data channel communication. 18. The device of claim 1, wherein the data received from the pen input data channel comprises short range wireless first time pairing data. 19. The device of claim 11, wherein to communicate, in a connected condition, higher bandwidth data using the short range wireless connection comprises communicating the higher bandwidth data from the device to the smart pen. 20. A smart pen, comprising:
a tip; and a body; the body includes a processor and a memory storing instructions that are executable by the processor to: transmit data to a device using a pen input data channel; said data including short range wireless connection data; establish, using said data, a short range wireless connection with the device; and communicate, in a connected condition with the device, higher bandwidth data using the short range wireless connection. | 2,600 |
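The smart pen claims describe a bootstrap flow: connection (pairing) data first travels over the narrow pen input channel, that data is then used to bring up a short range wireless connection, and only in the connected condition does higher-bandwidth data flow. The toy class below models that ordering; the class and method names are hypothetical and the pairing payload is an opaque placeholder, not a real pairing protocol.

```python
class SmartPenLink:
    """Toy model of the claimed flow: pairing data rides the narrow pen
    input channel, after which bulk data moves over the wireless link."""

    def __init__(self) -> None:
        self.connected = False
        self._pairing_blob = None

    def send_pairing_data(self, pairing_blob: bytes) -> None:
        # Claim 1: short range wireless connection data is communicated
        # over the pen input data channel (first, lower bandwidth).
        self._pairing_blob = pairing_blob

    def establish_connection(self) -> bool:
        # Claim 1: the received data is used to establish the wireless link.
        self.connected = self._pairing_blob is not None
        return self.connected

    def transfer(self, payload: bytes) -> str:
        # Claim 1: higher-bandwidth data flows only in a connected condition.
        if not self.connected:
            raise RuntimeError("no short range wireless connection")
        return "wireless"
```

Calling `transfer` before `establish_connection` fails, mirroring the claim's requirement that the higher-bandwidth exchange happens only once the connection exists.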
9,721 | 9,721 | 14,814,566 | 2,683 | Eyewear includes a frame and one or more stems extending distally from the frame. One or more processors are disposed within one or more of the frame or the stems, and one or more proximity sensor components are disposed within the stems defining thermal reception beams oriented in a rearward facing direction. Each proximity sensor component can include an infrared signal receiver to receive an infrared emission from an object. The one or more processors can execute a control operation when the proximity sensor components receive the infrared emission from the object. | 1. Eyewear, comprising:
a frame; one or more stems extending distally from the frame; one or more processors disposed within one or more of the frame or the one or more stems; one or more proximity sensor components disposed within the one or more stems and operable with the one or more processors, each proximity sensor component comprising an infrared signal receiver to receive an infrared emission from an object, the one or more stems each comprising a temple portion and an ear engagement portion, the one or more proximity sensor components disposed along the ear engagement portion; the one or more processors operable to execute a control operation when the one or more proximity sensor components receive the infrared emission from the object. 2. (canceled) 3. The eyewear of claim 1, the one or more stems extending distally from the frame in a rearward direction when in an open position, the infrared signal receiver to receive the infrared emission from the object along the rearward direction. 4. The eyewear of claim 1, the one or more stems comprising a first stem and a second stem, the one or more proximity sensor components comprising a first proximity sensor component disposed along the first stem and a second proximity sensor component disposed along the second stem. 5. The eyewear of claim 4, the first proximity sensor component defining at least a first reception beam oriented at least partially in a first direction, and the second proximity sensor component defining at least a second reception beam oriented at least partially in a second direction, the second direction different from the first direction. 6. The eyewear of claim 5, further comprising an audio output device operable with the one or more processors, the control operation comprising causing the audio output device to emit audible sound when the infrared signal receiver receives the infrared emission from the object. 7. 
The eyewear of claim 6, the audible sound a function of a distance of the object from the infrared signal receiver. 8. The eyewear of claim 5, further comprising a haptic device operable with the one or more processors, the control operation comprising causing the haptic device to deliver a tactile output when the infrared signal receiver receives the infrared emission from the object. 9. The eyewear of claim 8, the tactile output a function of a distance of the object from the infrared signal receiver. 10. The eyewear of claim 5, further comprising a wireless communication device operable with the one or more processors, the control operation comprising causing the wireless communication device to transmit a notification to an external electronic device when the infrared signal receiver receives the infrared emission from the object. 11. The eyewear of claim 10, the notification comprising an indication of whether the infrared emission was received from the first reception beam or the second reception beam. 12. The eyewear of claim 10, the eyewear comprising eyeglasses. 13. The eyewear of claim 1, further comprising:
an energy storage device, operable with the one or more processors; and a photovoltaic device to charge the energy storage device; the photovoltaic device disposed along the temple portion. 14. The eyewear of claim 1, further comprising one or more of an audio capture device or a video capture device, the control operation comprising actuating the one or more of the audio capture device or the video capture device. 15. (canceled) 16. (canceled) 17. (canceled) 18. (canceled) 19. (canceled) 20. (canceled) | 2,600 |
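Claims 7 and 9 of the eyewear row state that the audible sound and the tactile output are each "a function of a distance of the object" from the infrared receiver, without fixing the function. The sketch below shows one plausible mapping (linear falloff to zero at a 2 m range); the falloff shape, range, and names are assumptions for illustration only, not taken from the patent.

```python
def tactile_intensity(distance_m: float, max_range_m: float = 2.0) -> float:
    """Illustrative distance-to-output mapping for claim 9's haptic device:
    full intensity at contact, fading linearly to zero at max_range_m."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m
```

Any monotone mapping would satisfy the claim language; a stepped or inverse-square curve would be equally valid design choices.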
9,722 | 9,722 | 15,504,045 | 2,674 | Examples of activating cloud services for a printing device are disclosed. In one example implementation according to aspects of the present disclosure, a printing device activation process to activate a printing device is performed concurrently with a cloud credentials process to receive a cloud authentication token. A cloud services activation process then activates a cloud service for the printing device. | 1. A non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to:
concurrently perform a printing device activation process to activate a printing device and a cloud credentials process to receive a cloud authentication token; and initiate a cloud services activation process to activate a cloud service for the printing device. 2. The non-transitory computer-readable storage medium of claim 1, wherein the printing device activation process comprises:
gathering information about the printing device from the printing device, and enabling cloud services on the printing device. 3. The non-transitory computer-readable storage medium of claim 2, wherein information about the printing device includes a printing device identifier. 4. The non-transitory computer-readable storage medium of claim 1, wherein the cloud credentials process comprises:
transmitting a request to a gateway server to request the cloud authentication token, wherein the request includes a user authentication credential, and receiving the cloud authentication token responsive to the transmitted request. 5. The non-transitory computer-readable storage medium of claim 1, wherein initiating the cloud services activation process further comprises transmitting a cloud services activation request to a gateway server. 6. The non-transitory computer-readable storage medium of claim 5, wherein the cloud services activation request includes a plurality of bundled cloud services activation requests to be unpackaged by the gateway server and to be relayed by the gateway server to a plurality of cloud services servers. 7. The non-transitory computer-readable storage medium of claim 1, wherein initiating the cloud services activation process further comprises activating additional cloud services. 8. The non-transitory computer-readable storage medium of claim 1, further storing instructions that, when executed by the processor, cause the processor to:
cause the printing device to connect to a network prior to concurrently performing the printing device activation process and the cloud credentials process. 9. A method comprising:
causing, by a computing system, a printing device to connect to a network; concurrently performing, by the computing system, a printing device activation process to activate the printing device and a cloud credentials process to receive a cloud authentication token from a gateway server; and initiating, by the computing system, a cloud services activation process to activate a cloud service for the printing device by transmitting a cloud services activation request to the gateway server. 10. The method of claim 9, further comprising:
receiving, by the computing system, an acknowledgement of successful cloud services activation from the gateway server. 11. The method of claim 9, wherein the cloud services activation request includes a plurality of bundled cloud services activation requests to be unpackaged by the gateway server and to be relayed by the gateway server to a plurality of cloud services servers. 12. The method of claim 9,
wherein the printing device activation process comprises gathering information about the printing device from the printing device, and enabling cloud services on the printing device; and wherein the cloud credentials process comprises transmitting a request to the gateway server to request the cloud authentication token, wherein the request includes a user authentication credential, and receiving the cloud authentication token responsive to the transmitted request. 13. The method of claim 9, wherein causing the printing device to connect to the network occurs prior to concurrently performing the printing device activation process and the cloud credentials process. 14. A computing system comprising:
a network connection setup module executable by a processing resource to cause a printing device to connect to a network; a printing device activation module executable by the processing resource to perform a printing device activation process, the printing device activation process comprising gathering information about the printing device from the printing device and enabling cloud services on the printing device; a cloud credentials module executable by the processing resource to perform a cloud credentials process, the cloud credentials process comprising transmitting a request to a gateway server to request a cloud authentication token and receiving the cloud authentication token responsive to the transmitted request; and a cloud services activation module executable by the processing resource to activate a cloud service for the printing device by transmitting a cloud services activation request to the gateway server, wherein the printing device activation process and the cloud credentials process are performed concurrently. 15. The computing system of claim 13, wherein causing the printing device to connect to the network occurs prior to concurrently performing the printing device activation process and the cloud credentials process. | 2,600 |
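The cloud-printing claims hinge on concurrency: the printing device activation process and the cloud credentials process run at the same time, and only afterwards is the cloud services activation request sent. The sketch below models that ordering with standard threads; the three callables are hypothetical stand-ins for the claimed steps, not any vendor's API.

```python
import threading

def activate_printer_with_cloud(gather_info, get_token, activate_services):
    """Run the device activation and cloud credentials steps concurrently
    (as in claim 1), then issue the cloud services activation request.
    All three arguments are illustrative callables."""
    results = {}

    def device_activation():
        # Printing device activation process: gather info, enable services.
        results["device_info"] = gather_info()

    def cloud_credentials():
        # Cloud credentials process: obtain the cloud authentication token.
        results["token"] = get_token()

    threads = [threading.Thread(target=device_activation),
               threading.Thread(target=cloud_credentials)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # both processes must finish before services activation
    return activate_services(results["device_info"], results["token"])
```

Running the two preparatory steps in parallel is the point of the claim: neither depends on the other's output, so total setup latency is roughly the slower of the two rather than their sum.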
9,723 | 9,723 | 14,695,261 | 2,632 | Methods and systems are provided for power control in communications devices. Bonding of channels in communication devices may be dynamically adjusted, such as responsive to requests for bandwidth adjustment. For example, bonded channel configurations may be adjusted based on power, such as to single channel configurations (or to channel configurations with small number of channels, such as relative to current configurations) for low power operations. Components (or functions thereof) used in conjunction with receiving and/or processing bonded channels may be dynamically adjusted. Such dynamic adjustments may be performed, for example, such as to maintain required synchronization and system information to facilitate rapid data transfer resumption upon demand. | 1-22. (canceled) 23. A method comprising:
bonding a plurality of channels to create a first bonded channel set that comprises a primary channel and one or more secondary channels; responsive to a request for bandwidth adjustment, adjusting a number of channels in said first bonded channel set so as to define a second bonded channel set that comprises said primary channel; and processing data received via said primary channel or a secondary channel using timing related information determined based on said primary channel. 24. The method of claim 23, wherein said request for bandwidth adjustment is associated with a requirement to reduce power consumption. 25. The method of claim 23, wherein said request for bandwidth adjustment is associated with a requirement to reduce data throughput. 26. The method of claim 23, wherein said request for bandwidth adjustment is generated in response to a determination that bandwidth usage on said first bonded channel set is low and is unlikely to increase for a period of time. 27. The method of claim 23 comprising:
receiving a second request for bandwidth adjustment; and
adjusting a number of channels in said second bonded channel set so as to define a third bonded channel set comprising said primary channel and one or more secondary channels. 28. The method of claim 27, wherein said second request for bandwidth adjustment is generated in response to a determination that bandwidth usage on said second bonded channel set is likely to increase. 29. The method of claim 23, comprising dynamically adjusting a bandwidth of filtering applied to facilitate processing of said second bonded channel set. 30. The method of claim 23, comprising dynamically adjusting a clock rate of analog-to-digital conversion applied to facilitate processing of said second bonded channel set. 31. The method of claim 23, comprising:
monitoring received traffic; generating a statistical usage model based on said monitoring; and controlling a number of bonded channels based on said statistical usage model. 32. A device comprising:
circuitry operable to:
bond a plurality of channels to create a first bonded channel set that comprises a primary channel and one or more secondary channels;
responsive to a request for bandwidth adjustment, adjust a number of channels in said first bonded channel set so as to define a second bonded channel set that comprises said primary channel; and
process data received via said primary channel or a secondary channel using timing related information determined based on said primary channel. 33. The device of claim 32, wherein said request for bandwidth adjustment is associated with a requirement to reduce power consumption. 34. The device of claim 32, wherein said request for bandwidth adjustment is associated with a requirement to reduce data throughput. 35. The device of claim 32, wherein said request for bandwidth adjustment is generated in response to a determination that bandwidth usage on said first bonded channel set is low and is unlikely to increase for a period of time. 36. The device of claim 32, wherein said circuitry is operable to:
receive a second request for bandwidth adjustment; and adjust a number of channels in said second bonded channel set so as to define a third bonded channel set comprising said primary channel and one or more secondary channels. 37. The device of claim 36, wherein said second request for bandwidth adjustment is generated in response to a determination that bandwidth usage on said second bonded channel set is likely to increase. 38. The device of claim 32, wherein said circuitry is operable to dynamically adjust a bandwidth of a front-end filter in said device so as to facilitate processing of said second bonded channel set. 39. The device of claim 32, wherein said circuitry is operable to dynamically adjust a clock rate of an analog-to-digital converter in said device so as to facilitate processing of said second bonded channel set. 40. The device of claim 32, wherein said circuitry is operable to:
monitor traffic received in said device; generate a statistical usage model based on said monitoring; and control a number of bonded channels based on said statistical usage model. | Methods and systems are provided for power control in communications devices. Bonding of channels in communication devices may be dynamically adjusted, such as responsive to requests for bandwidth adjustment. For example, bonded channel configurations may be adjusted based on power, such as to single channel configurations (or to channel configurations with a small number of channels, such as relative to current configurations) for low power operations. Components (or functions thereof) used in conjunction with receiving and/or processing bonded channels may be dynamically adjusted. Such dynamic adjustments may be performed, for example, such as to maintain required synchronization and system information to facilitate rapid data transfer resumption upon demand. 1-22. (canceled) 23. A method comprising:
bonding a plurality of channels to create a first bonded channel set that comprises a primary channel and one or more secondary channels; responsive to a request for bandwidth adjustment, adjusting a number of channels in said first bonded channel set so as to define a second bonded channel set that comprises said primary channel; and processing data received via said primary channel or a secondary channel using timing related information determined based on said primary channel. 24. The method of claim 23, wherein said request for bandwidth adjustment is associated with a requirement to reduce power consumption. 25. The method of claim 23, wherein said request for bandwidth adjustment is associated with a requirement to reduce data throughput. 26. The method of claim 23, wherein said request for bandwidth adjustment is generated in response to a determination that bandwidth usage on said first bonded channel set is low and is unlikely to increase for a period of time. 27. The method of claim 23, comprising:
receiving a second request for bandwidth adjustment; and
adjusting a number of channels in said second bonded channel set so as to define a third bonded channel set comprising said primary channel and one or more secondary channels. 28. The method of claim 27, wherein said second request for bandwidth adjustment is generated in response to a determination that bandwidth usage on said second bonded channel set is likely to increase. 29. The method of claim 23, comprising dynamically adjusting a bandwidth of filtering applied to facilitate processing of said second bonded channel set. 30. The method of claim 23, comprising dynamically adjusting a clock rate of analog-to-digital conversion applied to facilitate processing of said second bonded channel set. 31. The method of claim 23, comprising:
monitoring received traffic; generating a statistical usage model based on said monitoring; and controlling a number of bonded channels based on said statistical usage model. 32. A device comprising:
circuitry operable to:
bond a plurality of channels to create a first bonded channel set that comprises a primary channel and one or more secondary channels;
responsive to a request for bandwidth adjustment, adjust a number of channels in said first bonded channel set so as to define a second bonded channel set that comprises said primary channel; and
process data received via said primary channel or a secondary channel using timing related information determined based on said primary channel. 33. The device of claim 32, wherein said request for bandwidth adjustment is associated with a requirement to reduce power consumption. 34. The device of claim 32, wherein said request for bandwidth adjustment is associated with a requirement to reduce data throughput. 35. The device of claim 32, wherein said request for bandwidth adjustment is generated in response to a determination that bandwidth usage on said first bonded channel set is low and is unlikely to increase for a period of time. 36. The device of claim 32, wherein said circuitry is operable to:
receive a second request for bandwidth adjustment; and adjust a number of channels in said second bonded channel set so as to define a third bonded channel set comprising said primary channel and one or more secondary channels. 37. The device of claim 36, wherein said second request for bandwidth adjustment is generated in response to a determination that bandwidth usage on said second bonded channel set is likely to increase. 38. The device of claim 32, wherein said circuitry is operable to dynamically adjust a bandwidth of a front-end filter in said device so as to facilitate processing of said second bonded channel set. 39. The device of claim 32, wherein said circuitry is operable to dynamically adjust a clock rate of an analog-to-digital converter in said device so as to facilitate processing of said second bonded channel set. 40. The device of claim 32, wherein said circuitry is operable to:
monitor traffic received in said device; generate a statistical usage model based on said monitoring; and control a number of bonded channels based on said statistical usage model. | 2,600 |
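One way to picture the claimed behavior (a bandwidth-reduction request collapses the bonded set down to the primary channel, which keeps supplying timing information so the set can grow back on demand) is the sketch below. The class and channel names are assumptions for illustration, not anything from the patent:

```python
class BondedChannelSet:
    """Illustrative model of a bonded channel set with one primary
    channel and zero or more secondary channels."""

    def __init__(self, primary, secondaries):
        self.primary = primary
        self.secondaries = list(secondaries)

    def adjust(self, request, available=()):
        """Adjust the number of bonded channels in response to a
        bandwidth-adjustment request."""
        if request == "reduce":
            # Low-power / low-throughput operation: drop the secondaries
            # but keep the primary, which still carries timing and
            # system information for rapid resumption.
            self.secondaries = []
        elif request == "increase":
            # Usage expected to rise again: re-bond secondary channels.
            self.secondaries = list(available)
        return self

    def channels(self):
        return [self.primary] + self.secondaries

    def timing_reference(self):
        # Data received on any channel is processed using timing-related
        # information determined from the primary channel.
        return self.primary
```

Because the primary channel survives every adjustment, a receiver built this way never loses its synchronization source, which is the point of retaining it in the second and third bonded channel sets.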
9,724 | 9,724 | 15,589,994 | 2,643 | Embodiments provide a schema for representing data usage plans and data usage statistics. The data usage plan describes threshold values associated with network connections of computing devices of the user. A web service dynamically generates data usage statistics for the computing devices to represent data consumed by the computing devices under the data usage plan. The schema is updated with the data usage statistics and distributed to the computing devices for presentation to the user. | 1. A system comprising:
a memory area associated with a mobile computing device; and a processor programmed to:
monitor active network interfaces to maintain data usage for each network interface per an application;
dynamically generate data usage statistics representing network data consumed under a user data usage plan, at least part of the data usage statistics being dynamically generated, at the mobile computing device, based on network data consumed by the mobile computing device, the dynamically generated data usage statistics indicating a per application breakout of the network data consumed over a subset of the network interfaces, the user data usage plan describing threshold values associated with one or more network connections of the mobile computing device; and
send, via a web service, at least a portion of the dynamically generated data usage statistics to a second computing device. 2. The system of claim 1, wherein said memory area stores a schema representing the user data usage plan. 3. The system of claim 2, wherein the schema includes a plurality of fields each comprising one or more of the following: peak times, off-peak times, peak time data consumption quota, off-peak time data consumption quota, peak time data consumption remaining, off-peak time data consumption remaining, a roaming rate, a mobile operator name, a billing cycle type, and a network connection type. 4. The system of claim 1, wherein each application is executed on the mobile computing device. 5. The system of claim 1, wherein the mobile computing device and the second computing device share the user data usage plan. 6. The system of claim 4, wherein the user data usage plan describes threshold values associated with one or more network connections of the mobile computing device or the one or more other computing devices, wherein the threshold values correspond to maximum data consumption allotted under the user data usage plan for one or more network connections. 7. The system of claim 1, wherein the processor is further programmed to present the dynamically generated data usage statistics in a user interface. 8. The system of claim 1, wherein the network data statistics further comprise one or more of the following: time and date of the network data consumption, location of the network data consumption, network interface used, subscriber identity module (SIM) card or other user identity module used for dual SIM, an international mobile station equipment identity (IMEI), internet protocol (IP) or other address of an access point, and an application responsible for the network data consumption. 9. A method comprising:
monitoring active network interfaces to maintain data usage for each network interface per an application; dynamically generating data usage statistics representing network data consumed under a user data usage plan, at least part of the data usage statistics being dynamically generated, at the mobile computing device, based on network data consumed by the mobile computing device, the dynamically generated data usage statistics indicating a per application breakout of the network data consumed over a subset of the network interfaces, the user data usage plan describing threshold values associated with one or more network connections of the mobile computing device; and sending, via a web service, at least a portion of the dynamically generated data usage statistics to a second computing device. 10. The method of claim 9, wherein a user interface is displayed differently for different types of user data usage plans. 11. The method of claim 9, wherein a user interface displays a plurality of user interface elements which are updated with an update in the generated data usage statistics. 12. The method of claim 9, wherein the generated data usage statistics include an amount of remaining network data for consumption and a quantity of time remaining for consumption of the remaining network data. 13. The method of claim 9, wherein dynamically generating the data usage statistics comprises receiving network data consumed by one or more computing devices. 14. The method of claim 9, wherein a user interface displays a user interface element which allows the user to select options or other configuration settings for receiving notifications. 15. One or more computer storage media embodying computer-executable instructions, that when executed by one or more processors, cause the one or more processors to perform operations comprising:
monitoring active network interfaces to maintain data usage for each network interface per an application; dynamically generating data usage statistics representing network data consumed under a user data usage plan, at least part of the data usage statistics being dynamically generated, at a mobile computing device, based on network data consumed by the mobile computing device, the dynamically generated data usage statistics indicating a per application breakout of the network data consumed over a subset of the network interfaces, the user data usage plan describing threshold values associated with one or more network connections of the mobile computing device; and sending, via a web service, at least a portion of the dynamically generated data usage statistics to a second computing device. 16. The computer storage media of claim 15, further comprising executable instructions, that when executed by one or more processors, cause the one or more processors to perform further operations comprising displaying a user interface differently for different types of user data usage plans. 17. The computer storage media of claim 15,
wherein the mobile computing device and the second computing device share the user data usage plan. 18. The computer storage media of claim 17, further comprising executable instructions, that when executed by one or more processors, cause the one or more processors to perform further operations comprising displaying the data usage statistics associated with the mobile computing device and data usage statistics associated with the second computing device, which shares the user data usage plan, in separate user interface elements. 19. The computer storage media of claim 16, further comprising executable instructions, that when executed by one or more processors, cause the one or more processors to perform further operations comprising displaying the data usage statistics in a user interface element on a home screen of the mobile computing device. 20. The computer storage media of claim 16, further comprising executable instructions, that when executed by one or more processors, cause the one or more processors to perform further operations comprising displaying one or more of the following:
threshold values associated with a plurality of network connections of the mobile computing device, an amount of data currently consumed under the user data usage plan, and how much data consumption is remaining and over which of the plurality of network connections. | Embodiments provide a schema for representing data usage plans and data usage statistics. The data usage plan describes threshold values associated with network connections of computing devices of the user. A web service dynamically generates data usage statistics for the computing devices to represent data consumed by the computing devices under the data usage plan. The schema is updated with the data usage statistics and distributed to the computing devices for presentation to the user. 1. A system comprising:
a memory area associated with a mobile computing device; and a processor programmed to:
monitor active network interfaces to maintain data usage for each network interface per an application;
dynamically generate data usage statistics representing network data consumed under a user data usage plan, at least part of the data usage statistics being dynamically generated, at the mobile computing device, based on network data consumed by the mobile computing device, the dynamically generated data usage statistics indicating a per application breakout of the network data consumed over a subset of the network interfaces, the user data usage plan describing threshold values associated with one or more network connections of the mobile computing device; and
send, via a web service, at least a portion of the dynamically generated data usage statistics to a second computing device. 2. The system of claim 1, wherein said memory area stores a schema representing the user data usage plan. 3. The system of claim 2, wherein the schema includes a plurality of fields each comprising one or more of the following: peak times, off-peak times, peak time data consumption quota, off-peak time data consumption quota, peak time data consumption remaining, off-peak time data consumption remaining, a roaming rate, a mobile operator name, a billing cycle type, and a network connection type. 4. The system of claim 1, wherein each application is executed on the mobile computing device. 5. The system of claim 1, wherein the mobile computing device and the second computing device share the user data usage plan. 6. The system of claim 4, wherein the user data usage plan describes threshold values associated with one or more network connections of the mobile computing device or the one or more other computing devices, wherein the threshold values correspond to maximum data consumption allotted under the user data usage plan for one or more network connections. 7. The system of claim 1, wherein the processor is further programmed to present the dynamically generated data usage statistics in a user interface. 8. The system of claim 1, wherein the network data statistics further comprise one or more of the following: time and date of the network data consumption, location of the network data consumption, network interface used, subscriber identity module (SIM) card or other user identity module used for dual SIM, an international mobile station equipment identity (IMEI), internet protocol (IP) or other address of an access point, and an application responsible for the network data consumption. 9. A method comprising:
monitoring active network interfaces to maintain data usage for each network interface per an application; dynamically generating data usage statistics representing network data consumed under a user data usage plan, at least part of the data usage statistics being dynamically generated, at the mobile computing device, based on network data consumed by the mobile computing device, the dynamically generated data usage statistics indicating a per application breakout of the network data consumed over a subset of the network interfaces, the user data usage plan describing threshold values associated with one or more network connections of the mobile computing device; and sending, via a web service, at least a portion of the dynamically generated data usage statistics to a second computing device. 10. The method of claim 9, wherein a user interface is displayed differently for different types of user data usage plans. 11. The method of claim 9, wherein a user interface displays a plurality of user interface elements which are updated with an update in the generated data usage statistics. 12. The method of claim 9, wherein the generated data usage statistics include an amount of remaining network data for consumption and a quantity of time remaining for consumption of the remaining network data. 13. The method of claim 9, wherein dynamically generating the data usage statistics comprises receiving network data consumed by one or more computing devices. 14. The method of claim 9, wherein a user interface displays a user interface element which allows the user to select options or other configuration settings for receiving notifications. 15. One or more computer storage media embodying computer-executable instructions, that when executed by one or more processors, cause the one or more processors to perform operations comprising:
monitoring active network interfaces to maintain data usage for each network interface per an application; dynamically generating data usage statistics representing network data consumed under a user data usage plan, at least part of the data usage statistics being dynamically generated, at a mobile computing device, based on network data consumed by the mobile computing device, the dynamically generated data usage statistics indicating a per application breakout of the network data consumed over a subset of the network interfaces, the user data usage plan describing threshold values associated with one or more network connections of the mobile computing device; and sending, via a web service, at least a portion of the dynamically generated data usage statistics to a second computing device. 16. The computer storage media of claim 15, further comprising executable instructions, that when executed by one or more processors, cause the one or more processors to perform further operations comprising displaying a user interface differently for different types of user data usage plans. 17. The computer storage media of claim 15,
wherein the mobile computing device and the second computing device share the user data usage plan. 18. The computer storage media of claim 17, further comprising executable instructions, that when executed by one or more processors, cause the one or more processors to perform further operations comprising displaying the data usage statistics associated with the mobile computing device and data usage statistics associated with the second computing device, which shares the user data usage plan, in separate user interface elements. 19. The computer storage media of claim 16, further comprising executable instructions, that when executed by one or more processors, cause the one or more processors to perform further operations comprising displaying the data usage statistics in a user interface element on a home screen of the mobile computing device. 20. The computer storage media of claim 16, further comprising executable instructions, that when executed by one or more processors, cause the one or more processors to perform further operations comprising displaying one or more of the following:
threshold values associated with a plurality of network connections of the mobile computing device, an amount of data currently consumed under the user data usage plan, and how much data consumption is remaining and over which of the plurality of network connections. | 2,600 |
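The monitoring loop these claims describe (per-interface, per-application byte counters rolled up against the plan's thresholds, with a per-application breakout over a subset of interfaces) can be sketched as follows. The class, method, and field names are invented for illustration:

```python
from collections import defaultdict

class DataUsageTracker:
    """Illustrative per-interface, per-application usage counter
    checked against a data-usage-plan quota."""

    def __init__(self, quota_bytes):
        self.quota_bytes = quota_bytes  # threshold value from the usage plan
        # interface -> application -> bytes consumed
        self._usage = defaultdict(lambda: defaultdict(int))

    def record(self, interface, app, nbytes):
        """Monitor an active interface: attribute consumed bytes to an app."""
        self._usage[interface][app] += nbytes

    def per_app_breakout(self, interfaces):
        """Per-application breakout of data consumed over a subset of
        the network interfaces."""
        breakout = defaultdict(int)
        for iface in interfaces:
            for app, consumed in self._usage[iface].items():
                breakout[app] += consumed
        return dict(breakout)

    def remaining(self):
        """Bytes still available for consumption under the plan's quota."""
        consumed = sum(b for apps in self._usage.values()
                       for b in apps.values())
        return max(self.quota_bytes - consumed, 0)
```

A UI layer of the kind claims 10-12 describe would then render `per_app_breakout` and `remaining` rather than computing anything itself, which keeps the statistics generation on the device and the presentation separate.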
9,725 | 9,725 | 11,968,051 | 2,621 | In one aspect of the invention, a graphical user interface on a portable multifunction device with a touch screen display comprises: an hour column comprising a sequence of hour numbers; a minute column comprising a sequence of minute numbers; and a selection row that intersects the hour column and the minute column. In response to detecting a gesture on the hour column, the hour numbers in the hour column are scrolled without scrolling the minute numbers in the minute column. In response to detecting a gesture on the minute column, the minute numbers in the minute column are scrolled without scrolling the hour numbers in the hour column. The single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, are used as time input for a function or application on the multifunction device. | 1. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying a date column comprising a sequence of dates, wherein a respective date in the sequence of dates comprises a name of a month and a date number of a day within the month; displaying an hour column comprising a sequence of hour numbers; displaying a minute column comprising a sequence of minute numbers; displaying a selection row that intersects the date column, the hour column, and the minute column and contains a single date, a single hour number, and a single minute number; detecting a gesture on the date column; in response to detecting the gesture on the date column, scrolling the dates in the date column without scrolling the hour numbers in the hour column or the minute numbers in the minute column; detecting a gesture on the hour column; in response to detecting the gesture on the hour column, scrolling the hour numbers in the hour column without scrolling the dates in the date column or the minute numbers in the minute column, wherein the hour numbers form a continuous loop in the hour column; detecting a gesture on the minute column; in response to detecting the gesture on the minute column, scrolling the minute numbers in the minute column without scrolling the dates in the date column or the hour numbers in the hour column, wherein the minute numbers form a continuous loop in the minute column; and using the single date, the single hour number, and the single minute number in the selection row after scrolling the dates, the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 2. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying a date column comprising a sequence of dates, wherein a respective date in the sequence of dates comprises a name of a month and a date number of a day within the month; displaying an hour column comprising a sequence of hour numbers; displaying a minute column comprising a sequence of minute numbers; displaying a selection row that intersects the date column, the hour column, and the minute column and contains a single date, a single hour number, and a single minute number; detecting a gesture on the date column; in response to detecting the gesture on the date column, scrolling the dates in the date column without scrolling the hour numbers in the hour column or the minute numbers in the minute column; detecting a gesture on the hour column; in response to detecting the gesture on the hour column, scrolling the hour numbers in the hour column without scrolling the dates in the date column or the minute numbers in the minute column; detecting a gesture on the minute column; in response to detecting the gesture on the minute column, scrolling the minute numbers in the minute column without scrolling the dates in the date column or the hour numbers in the hour column; and using the single date, the single hour number, and the single minute number in the selection row after scrolling the dates, the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 3. The computer-implemented method of claim 2, wherein the respective date in the sequence of dates further comprises a day of the week corresponding to the name of the month and the date number of the day within the month. 4. The computer-implemented method of claim 2, wherein the gesture on the date column is a finger gesture. 5. The computer-implemented method of claim 2, wherein the gesture on the date column is a substantially vertical swipe. 6. 
The computer-implemented method of claim 2, wherein the gesture on the hour column is a finger gesture. 7. The computer-implemented method of claim 2, wherein the gesture on the hour column is a substantially vertical swipe. 8. The computer-implemented method of claim 2, wherein the gesture on the minute column is a finger gesture. 9. The computer-implemented method of claim 2, wherein the gesture on the minute column is a substantially vertical swipe. 10. The computer-implemented method of claim 2, wherein the hour numbers form a continuous loop in the hour column. 11. The computer-implemented method of claim 2, wherein the minute numbers form a continuous loop in the minute column. 12. A graphical user interface on a portable multifunction device with a touch screen display, comprising:
a date column comprising a sequence of dates, wherein a respective date in the sequence of dates comprises a name of a month and a date number of a day within the month; an hour column comprising a sequence of hour numbers; a minute column comprising a sequence of minute numbers; a selection row that intersects the date column, the hour column, and the minute column and contains a single date, a single hour number, and a single minute number; wherein:
in response to detecting a gesture on the date column, the dates in the date column are scrolled without scrolling the hour numbers in the hour column or the minute numbers in the minute column;
in response to detecting a gesture on the hour column, the hour numbers in the hour column are scrolled without scrolling the dates in the date column or the minute numbers in the minute column;
in response to detecting a gesture on the minute column, the minute numbers in the minute column are scrolled without scrolling the dates in the date column or the hour numbers in the hour column; and
after the dates, the hour numbers and the minute numbers, respectively, have been scrolled, the single date, the single hour number, and the single minute number in the selection row are used as time input for a function or application on the multifunction device. 13. A portable multifunction device, comprising:
a touch screen display; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including:
instructions for displaying a date column comprising a sequence of dates, wherein a respective date in the sequence of dates comprises a name of a month and a date number of a day within the month;
instructions for displaying an hour column comprising a sequence of hour numbers;
instructions for displaying a minute column comprising a sequence of minute numbers;
instructions for displaying a selection row that intersects the date column, the hour column, and the minute column and contains a single date, a single hour number, and a single minute number;
instructions for detecting a gesture on the date column;
instructions for scrolling the dates in the date column without scrolling the hour numbers in the hour column or the minute numbers in the minute column, in response to detecting the gesture on the date column;
instructions for detecting a gesture on the hour column;
instructions for scrolling the hour numbers in the hour column without scrolling the dates in the date column or the minute numbers in the minute column, in response to detecting the gesture on the hour column;
instructions for detecting a gesture on the minute column;
instructions for scrolling the minute numbers in the minute column without scrolling the dates in the date column or the hour numbers in the hour column, in response to detecting the gesture on the minute column; and
instructions for using the single date, the single hour number, and the single minute number in the selection row after scrolling the dates, the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 14. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable multifunction device with a touch screen display, cause the device to:
display a date column comprising a sequence of dates, wherein a respective date in the sequence of dates comprises a name of a month and a date number of a day within the month; display an hour column comprising a sequence of hour numbers; display a minute column comprising a sequence of minute numbers; display a selection row that intersects the date column, the hour column, and the minute column and contains a single date, a single hour number, and a single minute number; detect a gesture on the date column; scroll the dates in the date column without scrolling the hour numbers in the hour column or the minute numbers in the minute column, in response to detecting the gesture on the date column; detect a gesture on the hour column; scroll the hour numbers in the hour column without scrolling the dates in the date column or the minute numbers in the minute column, in response to detecting the gesture on the hour column; detect a gesture on the minute column; scroll the minute numbers in the minute column without scrolling the dates in the date column or the hour numbers in the hour column, in response to detecting the gesture on the minute column; and use the single date, the single hour number, and the single minute number in the selection row after scrolling the dates, the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 15. A portable multifunction device with a touch screen display, comprising:
means for displaying a date column comprising a sequence of dates, wherein a respective date in the sequence of dates comprises a name of a month and a date number of a day within the month; means for displaying an hour column comprising a sequence of hour numbers; means for displaying a minute column comprising a sequence of minute numbers; means for displaying a selection row that intersects the date column, the hour column, and the minute column and contains a single date, a single hour number, and a single minute number; means for detecting a gesture on the date column; means for scrolling the dates in the date column without scrolling the hour numbers in the hour column or the minute numbers in the minute column, in response to detecting the gesture on the date column; means for detecting a gesture on the hour column; means for scrolling the hour numbers in the hour column without scrolling the dates in the date column or the minute numbers in the minute column, in response to detecting the gesture on the hour column; means for detecting a gesture on the minute column; means for scrolling the minute numbers in the minute column without scrolling the dates in the date column or the hour numbers in the hour column, in response to detecting the gesture on the minute column; and means for using the single date, the single hour number, and the single minute number in the selection row after scrolling the dates, the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 16. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying an hour column comprising a sequence of hour numbers; displaying a minute column comprising a sequence of minute numbers; displaying a selection row that intersects the hour column and the minute column and contains a single hour number and a single minute number; detecting a gesture on the hour column; in response to detecting the gesture on the hour column, scrolling the hour numbers in the hour column without scrolling the minute numbers in the minute column, wherein the hour numbers form a continuous loop in the hour column; detecting a gesture on the minute column; in response to detecting the gesture on the minute column, scrolling the minute numbers in the minute column without scrolling the hour numbers in the hour column, wherein the minute numbers form a continuous loop in the minute column; and using the single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 17. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying an hour column comprising a sequence of hour numbers; displaying a minute column comprising a sequence of minute numbers; displaying a selection row that intersects the hour column and the minute column and contains a single hour number and a single minute number; detecting a gesture on the hour column; in response to detecting the gesture on the hour column, scrolling the hour numbers in the hour column without scrolling the minute numbers in the minute column; detecting a gesture on the minute column; in response to detecting the gesture on the minute column, scrolling the minute numbers in the minute column without scrolling the hour numbers in the hour column; and using the single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 18. The computer-implemented method of claim 17, wherein the gesture on the hour column is a finger gesture. 19. The computer-implemented method of claim 17, wherein the gesture on the hour column is a substantially vertical swipe. 20. The computer-implemented method of claim 17, wherein the gesture on the minute column is a finger gesture. 21. The computer-implemented method of claim 17, wherein the gesture on the minute column is a substantially vertical swipe. 22. The computer-implemented method of claim 17, wherein the hour numbers form a continuous loop in the hour column. 23. The computer-implemented method of claim 17, wherein the minute numbers form a continuous loop in the minute column. 24. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying an hour column comprising a sequence of hour numbers; displaying a minute column comprising a sequence of minute numbers; displaying a seconds column comprising a sequence of seconds numbers; displaying a selection row that intersects the hour column, the minute column, and the seconds column and contains a single hour number, a single minute number and a single seconds number; detecting a gesture on the hour column; in response to detecting the gesture on the hour column, scrolling the hour numbers in the hour column without scrolling the minute numbers in the minute column; detecting a gesture on the minute column; in response to detecting the gesture on the minute column, scrolling the minute numbers in the minute column without scrolling the hour numbers in the hour column; detecting a gesture on the seconds column; in response to detecting the gesture on the seconds column, scrolling the seconds numbers in the seconds column without scrolling the minute numbers in the minute column; and using the single hour number, the single minute number, and the single seconds number in the selection row after scrolling the hour numbers, the minute numbers, and the seconds numbers, respectively, as time input for a function or application on the multifunction device. 25. A graphical user interface on a portable multifunction device with a touch screen display, comprising:
an hour column comprising a sequence of hour numbers; a minute column comprising a sequence of minute numbers; and a selection row that intersects the hour column and the minute column and contains a single hour number and a single minute number; wherein:
in response to detecting a gesture on the hour column, the hour numbers in the hour column are scrolled without scrolling the minute numbers in the minute column;
in response to detecting a gesture on the minute column, the minute numbers in the minute column are scrolled without scrolling the hour numbers in the hour column; and
the single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, are used as time input for a function or an application on the multifunction device. 26. A portable multifunction device, comprising:
a touch screen display; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including:
instructions for displaying an hour column comprising a sequence of hour numbers;
instructions for displaying a minute column comprising a sequence of minute numbers;
instructions for displaying a selection row that intersects the hour column and the minute column and contains a single hour number and a single minute number;
instructions for detecting a gesture on the hour column;
instructions for scrolling the hour numbers in the hour column without scrolling the minute numbers in the minute column, in response to detecting the gesture on the hour column;
instructions for detecting a gesture on the minute column;
instructions for scrolling the minute numbers in the minute column without scrolling the hour numbers in the hour column, in response to detecting the gesture on the minute column; and
instructions for using the single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 27. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable multifunction device with a touch screen display, cause the device to:
display an hour column comprising a sequence of hour numbers; display a minute column comprising a sequence of minute numbers; display a selection row that intersects the hour column and the minute column and contains a single hour number and a single minute number; detect a gesture on the hour column; scroll the hour numbers in the hour column without scrolling the minute numbers in the minute column, in response to detecting the gesture on the hour column; detect a gesture on the minute column; scroll the minute numbers in the minute column without scrolling the hour numbers in the hour column, in response to detecting the gesture on the minute column; and use the single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 28. A portable multifunction device with a touch screen display, comprising:
means for displaying an hour column comprising a sequence of hour numbers; means for displaying a minute column comprising a sequence of minute numbers; means for displaying a selection row that intersects the hour column and the minute column and contains a single hour number and a single minute number; means for detecting a gesture on the hour column; means for scrolling the hour numbers in the hour column without scrolling the minute numbers in the minute column, in response to detecting the gesture on the hour column; means for detecting a gesture on the minute column; means for scrolling the minute numbers in the minute column without scrolling the hour numbers in the hour column, in response to detecting the gesture on the minute column; and means for using the single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 29. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, simultaneously displaying
a month column comprising a sequence of month identifiers;
a date column comprising a sequence of date numbers; and
a selection row that intersects the month column and the date column and contains a single month identifier and a single date number;
detecting a gesture on the month column; in response to detecting the gesture on the month column, scrolling the month identifiers in the month column without scrolling the date numbers in the date column, wherein the month identifiers form a continuous loop in the month column; detecting a gesture on the date column; in response to detecting the gesture on the date column, scrolling the date numbers in the date column without scrolling the month identifiers in the month column, wherein the date numbers form a continuous loop in the date column; and using the single month identifier and the single date number in the selection row after scrolling the month identifiers and the date numbers, respectively, as date input for an application on the multifunction device. 30. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying a month column comprising a sequence of month identifiers; displaying a date column comprising a sequence of date numbers; displaying a selection row that intersects the month column and the date column and contains a single month identifier and a single date number; detecting a gesture on the month column; in response to detecting the gesture on the month column, scrolling the month identifiers in the month column without scrolling the date numbers in the date column; detecting a gesture on the date column; in response to detecting the gesture on the date column, scrolling the date numbers in the date column without scrolling the month identifiers in the month column; and using the single month identifier and the single date number in the selection row after scrolling the month identifiers and the date numbers, respectively, as date input for a function or application on the multifunction device. 31. The computer-implemented method of claim 30, wherein the gesture on the month column is a finger gesture. 32. The computer-implemented method of claim 30, wherein the gesture on the month column is a substantially vertical swipe. 33. The computer-implemented method of claim 30, wherein the gesture on the month column is a substantially vertical gesture on or near the month column. 34. The computer-implemented method of claim 30, wherein the gesture on the date column is a finger gesture. 35. The computer-implemented method of claim 30, wherein the gesture on the date column is a substantially vertical swipe. 36. The computer-implemented method of claim 30, wherein the gesture on the date column is a substantially vertical gesture on or near the date column. 37. The computer-implemented method of claim 30, wherein the month identifiers form a continuous loop in the month column. 38. The computer-implemented method of claim 30, wherein the date numbers form a continuous loop in the date column. 
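[Editor's illustration, not part of the claims.] Claims 37 and 38 recite that the month identifiers and date numbers "form a continuous loop" in their columns, i.e. scrolling past either end wraps around. A minimal sketch of that wrap-around behavior using modular indexing; the class and method names below are hypothetical, not from the patent:

```python
# Hypothetical sketch of the "continuous loop" column behavior of
# claims 37-38: scrolling past either end of a column wraps around,
# which is naturally modeled with modular indexing.

MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

class LoopingColumn:
    """A scrollable column whose values wrap around in a continuous loop."""

    def __init__(self, values):
        self.values = values
        self.index = 0  # index of the value shown in the selection row

    def scroll(self, steps):
        # Positive steps scroll forward, negative backward; the modulo
        # keeps the index inside the value list, producing the loop.
        self.index = (self.index + steps) % len(self.values)

    @property
    def selected(self):
        return self.values[self.index]

month_column = LoopingColumn(MONTHS)
month_column.scroll(-1)       # scrolling up from January wraps around
print(month_column.selected)  # -> December
```

A date-number column would loop the same way over `range(1, 32)`; only the column under the gesture is scrolled.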
39. The computer-implemented method of claim 30, wherein the month column, date column and selection row are simultaneously displayed. 40. A graphical user interface on a portable multifunction device with a touch screen display, comprising:
a month column comprising a sequence of month identifiers; a date column comprising a sequence of date numbers; and a selection row that intersects the month column and the date column and contains a single month identifier and a single date number; wherein:
in response to detecting a gesture on the month column, the month identifiers in the month column are scrolled without scrolling the date numbers in the date column;
in response to detecting a gesture on the date column, the date numbers in the date column are scrolled without scrolling the month identifiers in the month column; and
the single month identifier and the single date number in the selection row after scrolling the month identifiers and the date numbers, respectively, are used as date input for a function or application on the multifunction device. 41. A portable multifunction device, comprising:
a touch screen display; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including:
instructions for displaying a month column comprising a sequence of month identifiers;
instructions for displaying a date column comprising a sequence of date numbers;
instructions for displaying a selection row that intersects the month column and the date column and contains a single month identifier and a single date number;
instructions for detecting a gesture on the month column;
instructions for scrolling the month identifiers in the month column without scrolling the date numbers in the date column, in response to detecting the gesture on the month column;
instructions for detecting a gesture on the date column;
instructions for scrolling the date numbers in the date column without scrolling the month identifiers in the month column, in response to detecting the gesture on the date column; and
instructions for using the single month identifier and the single date number in the selection row after scrolling the month identifiers and the date numbers, respectively, as date input for a function or application on the multifunction device. 42. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable multifunction device with a touch screen display, cause the device to:
display a month column comprising a sequence of month identifiers; display a date column comprising a sequence of date numbers; display a selection row that intersects the month column and the date column and contains a single month identifier and a single date number; detect a gesture on the month column; scroll the month identifiers in the month column without scrolling the date numbers in the date column, in response to detecting the gesture on the month column; detect a gesture on the date column; scroll the date numbers in the date column without scrolling the month identifiers in the month column, in response to detecting the gesture on the date column; and use the single month identifier and the single date number in the selection row after scrolling the month identifiers and the date numbers, respectively, as date input for a function or application on the multifunction device. 43. A portable multifunction device with a touch screen display, comprising:
means for displaying a month column comprising a sequence of month identifiers; means for displaying a date column comprising a sequence of date numbers; means for displaying a selection row that intersects the month column and the date column and contains a single month identifier and a single date number; means for detecting a gesture on the month column; means for scrolling the month identifiers in the month column without scrolling the date numbers in the date column, in response to detecting the gesture on the month column; means for detecting a gesture on the date column; means for scrolling the date numbers in the date column without scrolling the month identifiers in the month column, in response to detecting the gesture on the date column; and means for using the single month identifier and the single date number in the selection row after scrolling the month identifiers and the date numbers, respectively, as date input for a function or application on the multifunction device. 44. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying a plurality of columns, each comprising a sequence of time related values, wherein the plurality of columns includes at least three distinct columns; displaying a selection row that intersects each of the columns, the row containing a single value from each of the columns, the values in the row representing a multi-component time value; detecting a gesture on a respective column; in response to detecting the gesture on the respective column, scrolling the values in the respective column without scrolling the values in the other columns so as to change the single value in the respective column that is displayed in the selection row; repeating the detecting and scrolling with respect to another respective column; and using the multi-component time value as a time input for a function or application on the multifunction device. 45. The computer-implemented method of claim 44, wherein the plurality of columns includes at least three of: a month column, a date column, a combined month and date column, an hour column, a minute column, a seconds column, and an AM/PM column. 46. A graphical user interface on a portable multifunction device with a touch screen display, comprising:
a plurality of columns, each comprising a sequence of time related values, wherein the plurality of columns includes at least three distinct columns; and a selection row that intersects each of the columns, the row containing a single value from each of the columns, the values in the row representing a multi-component time value; wherein: in response to detecting a gesture on the respective column, the values in the respective column are scrolled without scrolling the values in the other columns so as to change the single value in the respective column that is displayed in the selection row; the detecting and scrolling are repeated with respect to another respective column; and the multi-component time value is used as a time input for a function or application on the multifunction device. 47. A portable multifunction device, comprising:
a touch screen display; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including:
instructions for displaying a plurality of columns, each comprising a sequence of time related values, wherein the plurality of columns includes at least three distinct columns;
instructions for displaying a selection row that intersects each of the columns, the row containing a single value from each of the columns, the values in the row representing a multi-component time value;
instructions for detecting a gesture on a respective column;
instructions for, in response to detecting the gesture on the respective column, scrolling the values in the respective column without scrolling the values in the other columns so as to change the single value in the respective column that is displayed in the selection row;
instructions for repeating the detecting and scrolling with respect to another respective column; and
instructions for using the multi-component time value as a time input for a function or application on the multifunction device. 48. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable multifunction device with a touch screen display, cause the device to:
display a plurality of columns, each comprising a sequence of time related values, wherein the plurality of columns includes at least three distinct columns; display a selection row that intersects each of the columns, the row containing a single value from each of the columns, the values in the row representing a multi-component time value; detect a gesture on a respective column; in response to detecting the gesture on the respective column, scroll the values in the respective column without scrolling the values in the other columns so as to change the single value in the respective column that is displayed in the selection row; repeat the detecting and scrolling with respect to another respective column; and use the multi-component time value as a time input for a function or application on the multifunction device. 49. A portable multifunction device with a touch screen display, comprising:
means for displaying a plurality of columns, each comprising a sequence of time related values, wherein the plurality of columns includes at least three distinct columns; means for displaying a selection row that intersects each of the columns, the row containing a single value from each of the columns, the values in the row representing a multi-component time value; means for detecting a gesture on a respective column; means for, in response to detecting the gesture on the respective column, scrolling the values in the respective column without scrolling the values in the other columns so as to change the single value in the respective column that is displayed in the selection row; means for repeating the detecting and scrolling with respect to another respective column; and means for using the multi-component time value as a time input for a function or application on the multifunction device.

In one aspect of the invention, a graphical user interface on a portable multifunction device with a touch screen display comprises: an hour column comprising a sequence of hour numbers; a minute column comprising a sequence of minute numbers; and a selection row that intersects the hour column and the minute column. In response to detecting a gesture on the hour column, the hour numbers in the hour column are scrolled without scrolling the minute numbers in the minute column. In response to detecting a gesture on the minute column, the minute numbers in the minute column are scrolled without scrolling the hour numbers in the hour column. The single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, are used as time input for a function or application on the multifunction device.

1. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying a date column comprising a sequence of dates, wherein a respective date in the sequence of dates comprises a name of a month and a date number of a day within the month; displaying an hour column comprising a sequence of hour numbers; displaying a minute column comprising a sequence of minute numbers; displaying a selection row that intersects the date column, the hour column, and the minute column and contains a single date, a single hour number, and a single minute number; detecting a gesture on the date column; in response to detecting the gesture on the date column, scrolling the dates in the date column without scrolling the hour numbers in the hour column or the minute numbers in the minute column; detecting a gesture on the hour column; in response to detecting the gesture on the hour column, scrolling the hour numbers in the hour column without scrolling the dates in the date column or the minute numbers in the minute column, wherein the hour numbers form a continuous loop in the hour column; detecting a gesture on the minute column; in response to detecting the gesture on the minute column, scrolling the minute numbers in the minute column without scrolling the dates in the date column or the hour numbers in the hour column, wherein the minute numbers form a continuous loop in the minute column; and using the single date, the single hour number, and the single minute number in the selection row after scrolling the dates, the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 2. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying a date column comprising a sequence of dates, wherein a respective date in the sequence of dates comprises a name of a month and a date number of a day within the month; displaying an hour column comprising a sequence of hour numbers; displaying a minute column comprising a sequence of minute numbers; displaying a selection row that intersects the date column, the hour column, and the minute column and contains a single date, a single hour number, and a single minute number; detecting a gesture on the date column; in response to detecting the gesture on the date column, scrolling the dates in the date column without scrolling the hour numbers in the hour column or the minute numbers in the minute column; detecting a gesture on the hour column; in response to detecting the gesture on the hour column, scrolling the hour numbers in the hour column without scrolling the dates in the date column or the minute numbers in the minute column; detecting a gesture on the minute column; in response to detecting the gesture on the minute column, scrolling the minute numbers in the minute column without scrolling the dates in the date column or the hour numbers in the hour column; and using the single date, the single hour number, and the single minute number in the selection row after scrolling the dates, the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 3. The computer-implemented method of claim 2, wherein the respective date in the sequence of dates further comprises a day of the week corresponding to the name of the month and the date number of the day within the month. 4. The computer-implemented method of claim 2, wherein the gesture on the date column is a finger gesture. 5. The computer-implemented method of claim 2, wherein the gesture on the date column is a substantially vertical swipe. 6. 
The computer-implemented method of claim 2, wherein the gesture on the hour column is a finger gesture. 7. The computer-implemented method of claim 2, wherein the gesture on the hour column is a substantially vertical swipe. 8. The computer-implemented method of claim 2, wherein the gesture on the minute column is a finger gesture. 9. The computer-implemented method of claim 2, wherein the gesture on the minute column is a substantially vertical swipe. 10. The computer-implemented method of claim 2, wherein the hour numbers form a continuous loop in the hour column. 11. The computer-implemented method of claim 2, wherein the minute numbers form a continuous loop in the minute column. 12. A graphical user interface on a portable multifunction device with a touch screen display, comprising:
a date column comprising a sequence of dates, wherein a respective date in the sequence of dates comprises a name of a month and a date number of a day within the month; an hour column comprising a sequence of hour numbers; a minute column comprising a sequence of minute numbers; a selection row that intersects the date column, the hour column, and the minute column and contains a single date, a single hour number, and a single minute number; wherein:
in response to detecting a gesture on the date column, the dates in the date column are scrolled without scrolling the hour numbers in the hour column or the minute numbers in the minute column;
in response to detecting a gesture on the hour column, the hour numbers in the hour column are scrolled without scrolling the dates in the date column or the minute numbers in the minute column;
in response to detecting a gesture on the minute column, the minute numbers in the minute column are scrolled without scrolling the dates in the date column or the hour numbers in the hour column; and
after the dates, the hour numbers and the minute numbers, respectively, have been scrolled, the single date, the single hour number, and the single minute number in the selection row are used as time input for a function or application on the multifunction device. 13. A portable multifunction device, comprising:
a touch screen display; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including:
instructions for displaying a date column comprising a sequence of dates, wherein a respective date in the sequence of dates comprises a name of a month and a date number of a day within the month;
instructions for displaying an hour column comprising a sequence of hour numbers;
instructions for displaying a minute column comprising a sequence of minute numbers;
instructions for displaying a selection row that intersects the date column, the hour column, and the minute column and contains a single date, a single hour number, and a single minute number;
instructions for detecting a gesture on the date column;
instructions for scrolling the dates in the date column without scrolling the hour numbers in the hour column or the minute numbers in the minute column, in response to detecting the gesture on the date column;
instructions for detecting a gesture on the hour column;
instructions for scrolling the hour numbers in the hour column without scrolling the dates in the date column or the minute numbers in the minute column, in response to detecting the gesture on the hour column;
instructions for detecting a gesture on the minute column;
instructions for scrolling the minute numbers in the minute column without scrolling the dates in the date column or the hour numbers in the hour column, in response to detecting the gesture on the minute column; and
instructions for using the single date, the single hour number, and the single minute number in the selection row after scrolling the dates, the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 14. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable multifunction device with a touch screen display, cause the device to:
display a date column comprising a sequence of dates, wherein a respective date in the sequence of dates comprises a name of a month and a date number of a day within the month; display an hour column comprising a sequence of hour numbers; display a minute column comprising a sequence of minute numbers; display a selection row that intersects the date column, the hour column, and the minute column and contains a single date, a single hour number, and a single minute number; detect a gesture on the date column; scroll the dates in the date column without scrolling the hour numbers in the hour column or the minute numbers in the minute column, in response to detecting the gesture on the date column; detect a gesture on the hour column; scroll the hour numbers in the hour column without scrolling the dates in the date column or the minute numbers in the minute column, in response to detecting the gesture on the hour column; detect a gesture on the minute column; scroll the minute numbers in the minute column without scrolling the dates in the date column or the hour numbers in the hour column, in response to detecting the gesture on the minute column; and use the single date, the single hour number, and the single minute number in the selection row after scrolling the dates, the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 15. A portable multifunction device with a touch screen display, comprising:
means for displaying a date column comprising a sequence of dates, wherein a respective date in the sequence of dates comprises a name of a month and a date number of a day within the month; means for displaying an hour column comprising a sequence of hour numbers; means for displaying a minute column comprising a sequence of minute numbers; means for displaying a selection row that intersects the date column, the hour column, and the minute column and contains a single date, a single hour number, and a single minute number; means for detecting a gesture on the date column; means for scrolling the dates in the date column without scrolling the hour numbers in the hour column or the minute numbers in the minute column, in response to detecting the gesture on the date column; means for detecting a gesture on the hour column; means for scrolling the hour numbers in the hour column without scrolling the dates in the date column or the minute numbers in the minute column, in response to detecting the gesture on the hour column; means for detecting a gesture on the minute column; means for scrolling the minute numbers in the minute column without scrolling the dates in the date column or the hour numbers in the hour column, in response to detecting the gesture on the minute column; and means for using the single date, the single hour number, and the single minute number in the selection row after scrolling the dates, the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 16. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying an hour column comprising a sequence of hour numbers; displaying a minute column comprising a sequence of minute numbers; displaying a selection row that intersects the hour column and the minute column and contains a single hour number and a single minute number; detecting a gesture on the hour column; in response to detecting the gesture on the hour column, scrolling the hour numbers in the hour column without scrolling the minute numbers in the minute column, wherein the hour numbers form a continuous loop in the hour column; detecting a gesture on the minute column; in response to detecting the gesture on the minute column, scrolling the minute numbers in the minute column without scrolling the hour numbers in the hour column, wherein the minute numbers form a continuous loop in the minute column; and using the single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 17. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying an hour column comprising a sequence of hour numbers; displaying a minute column comprising a sequence of minute numbers; displaying a selection row that intersects the hour column and the minute column and contains a single hour number and a single minute number; detecting a gesture on the hour column; in response to detecting the gesture on the hour column, scrolling the hour numbers in the hour column without scrolling the minute numbers in the minute column; detecting a gesture on the minute column; in response to detecting the gesture on the minute column, scrolling the minute numbers in the minute column without scrolling the hour numbers in the hour column; and using the single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 18. The computer-implemented method of claim 17, wherein the gesture on the hour column is a finger gesture. 19. The computer-implemented method of claim 17, wherein the gesture on the hour column is a substantially vertical swipe. 20. The computer-implemented method of claim 17, wherein the gesture on the minute column is a finger gesture. 21. The computer-implemented method of claim 17, wherein the gesture on the minute column is a substantially vertical swipe. 22. The computer-implemented method of claim 17, wherein the hour numbers form a continuous loop in the hour column. 23. The computer-implemented method of claim 17, wherein the minute numbers form a continuous loop in the minute column. 24. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying an hour column comprising a sequence of hour numbers; displaying a minute column comprising a sequence of minute numbers; displaying a seconds column comprising a sequence of seconds numbers; displaying a selection row that intersects the hour column, the minute column, and the seconds column and contains a single hour number, a single minute number, and a single seconds number; detecting a gesture on the hour column; in response to detecting the gesture on the hour column, scrolling the hour numbers in the hour column without scrolling the minute numbers in the minute column; detecting a gesture on the minute column; in response to detecting the gesture on the minute column, scrolling the minute numbers in the minute column without scrolling the hour numbers in the hour column; detecting a gesture on the seconds column; in response to detecting the gesture on the seconds column, scrolling the seconds numbers in the seconds column without scrolling the minute numbers in the minute column; and using the single hour number, the single minute number, and the single seconds number in the selection row after scrolling the hour numbers, the minute numbers, and the seconds numbers, respectively, as time input for a function or application on the multifunction device. 25. A graphical user interface on a portable multifunction device with a touch screen display, comprising:
an hour column comprising a sequence of hour numbers; a minute column comprising a sequence of minute numbers; and a selection row that intersects the hour column and the minute column and contains a single hour number and a single minute number; wherein:
in response to detecting a gesture on the hour column, the hour numbers in the hour column are scrolled without scrolling the minute numbers in the minute column;
in response to detecting a gesture on the minute column, the minute numbers in the minute column are scrolled without scrolling the hour numbers in the hour column; and
the single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, are used as time input for a function or an application on the multifunction device. 26. A portable multifunction device, comprising:
a touch screen display; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including:
instructions for displaying an hour column comprising a sequence of hour numbers;
instructions for displaying a minute column comprising a sequence of minute numbers;
instructions for displaying a selection row that intersects the hour column and the minute column and contains a single hour number and a single minute number;
instructions for detecting a gesture on the hour column;
instructions for scrolling the hour numbers in the hour column without scrolling the minute numbers in the minute column, in response to detecting the gesture on the hour column;
instructions for detecting a gesture on the minute column;
instructions for scrolling the minute numbers in the minute column without scrolling the hour numbers in the hour column, in response to detecting the gesture on the minute column; and
instructions for using the single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 27. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable multifunction device with a touch screen display, cause the device to:
display an hour column comprising a sequence of hour numbers; display a minute column comprising a sequence of minute numbers; display a selection row that intersects the hour column and the minute column and contains a single hour number and a single minute number; detect a gesture on the hour column; scroll the hour numbers in the hour column without scrolling the minute numbers in the minute column, in response to detecting the gesture on the hour column; detect a gesture on the minute column; scroll the minute numbers in the minute column without scrolling the hour numbers in the hour column, in response to detecting the gesture on the minute column; and use the single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 28. A portable multifunction device with a touch screen display, comprising:
means for displaying an hour column comprising a sequence of hour numbers; means for displaying a minute column comprising a sequence of minute numbers; means for displaying a selection row that intersects the hour column and the minute column and contains a single hour number and a single minute number; means for detecting a gesture on the hour column; means for scrolling the hour numbers in the hour column without scrolling the minute numbers in the minute column, in response to detecting the gesture on the hour column; means for detecting a gesture on the minute column; means for scrolling the minute numbers in the minute column without scrolling the hour numbers in the hour column, in response to detecting the gesture on the minute column; and means for using the single hour number and the single minute number in the selection row after scrolling the hour numbers and the minute numbers, respectively, as time input for a function or application on the multifunction device. 29. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, simultaneously displaying
a month column comprising a sequence of month identifiers;
a date column comprising a sequence of date numbers; and
a selection row that intersects the month column and the date column and contains a single month identifier and a single date number;
detecting a gesture on the month column; in response to detecting the gesture on the month column, scrolling the month identifiers in the month column without scrolling the date numbers in the date column, wherein the month identifiers form a continuous loop in the month column; detecting a gesture on the date column; in response to detecting the gesture on the date column, scrolling the date numbers in the date column without scrolling the month identifiers in the month column, wherein the date numbers form a continuous loop in the date column; and using the single month identifier and the single date number in the selection row after scrolling the month identifiers and the date numbers, respectively, as date input for an application on the multifunction device. 30. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying a month column comprising a sequence of month identifiers; displaying a date column comprising a sequence of date numbers; displaying a selection row that intersects the month column and the date column and contains a single month identifier and a single date number; detecting a gesture on the month column; in response to detecting the gesture on the month column, scrolling the month identifiers in the month column without scrolling the date numbers in the date column; detecting a gesture on the date column; in response to detecting the gesture on the date column, scrolling the date numbers in the date column without scrolling the month identifiers in the month column; and using the single month identifier and the single date number in the selection row after scrolling the month identifiers and the date numbers, respectively, as date input for a function or application on the multifunction device. 31. The computer-implemented method of claim 30, wherein the gesture on the month column is a finger gesture. 32. The computer-implemented method of claim 30, wherein the gesture on the month column is a substantially vertical swipe. 33. The computer-implemented method of claim 30, wherein the gesture on the month column is a substantially vertical gesture on or near the month column. 34. The computer-implemented method of claim 30, wherein the gesture on the date column is a finger gesture. 35. The computer-implemented method of claim 30, wherein the gesture on the date column is a substantially vertical swipe. 36. The computer-implemented method of claim 30, wherein the gesture on the date column is a substantially vertical gesture on or near the date column. 37. The computer-implemented method of claim 30, wherein the month identifiers form a continuous loop in the month column. 38. The computer-implemented method of claim 30, wherein the date numbers form a continuous loop in the date column. 
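Stripped of claim language, claims 30–38 describe two wheels that scroll independently, each wrapping as a continuous loop, with the selection row reading one value per wheel. The sketch below is illustrative only; names such as `WheelColumn` are invented here, not taken from the patent.

```python
class WheelColumn:
    """One independently scrollable picker column (e.g. month identifiers
    or date numbers) whose values form a continuous loop."""

    def __init__(self, values):
        self.values = list(values)
        self.offset = 0  # index of the value sitting in the selection row

    def scroll(self, steps):
        # A gesture scrolls only this column; the modulo makes the
        # sequence wrap, so it behaves as a continuous loop.
        self.offset = (self.offset + steps) % len(self.values)

    @property
    def selected(self):
        return self.values[self.offset]


months = WheelColumn(["Jan", "Feb", "Mar", "Apr", "May", "Jun",
                      "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"])
dates = WheelColumn(range(1, 32))

months.scroll(-2)   # scrolling the month column leaves the date column alone
dates.scroll(14)
print(months.selected, dates.selected)  # -> Nov 15
```

Scrolling one `WheelColumn` never touches the other, which is the "without scrolling" limitation repeated throughout these claims.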
39. The computer-implemented method of claim 30, wherein the month column, date column and selection row are simultaneously displayed. 40. A graphical user interface on a portable multifunction device with a touch screen display, comprising:
a month column comprising a sequence of month identifiers; a date column comprising a sequence of date numbers; and a selection row that intersects the month column and the date column and contains a single month identifier and a single date number; wherein:
in response to detecting a gesture on the month column, the month identifiers in the month column are scrolled without scrolling the date numbers in the date column;
in response to detecting a gesture on the date column, the date numbers in the date column are scrolled without scrolling the month identifiers in the month column; and
the single month identifier and the single date number in the selection row after scrolling the month identifiers and the date numbers, respectively, are used as date input for a function or application on the multifunction device. 41. A portable multifunction device, comprising:
a touch screen display; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including:
instructions for displaying a month column comprising a sequence of month identifiers;
instructions for displaying a date column comprising a sequence of date numbers;
instructions for displaying a selection row that intersects the month column and the date column and contains a single month identifier and a single date number;
instructions for detecting a gesture on the month column;
instructions for scrolling the month identifiers in the month column without scrolling the date numbers in the date column, in response to detecting the gesture on the month column;
instructions for detecting a gesture on the date column;
instructions for scrolling the date numbers in the date column without scrolling the month identifiers in the month column, in response to detecting the gesture on the date column; and
instructions for using the single month identifier and the single date number in the selection row after scrolling the month identifiers and the date numbers, respectively, as date input for a function or application on the multifunction device. 42. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable multifunction device with a touch screen display, cause the device to:
display a month column comprising a sequence of month identifiers; display a date column comprising a sequence of date numbers; display a selection row that intersects the month column and the date column and contains a single month identifier and a single date number; detect a gesture on the month column; scroll the month identifiers in the month column without scrolling the date numbers in the date column, in response to detecting the gesture on the month column; detect a gesture on the date column; scroll the date numbers in the date column without scrolling the month identifiers in the month column, in response to detecting the gesture on the date column; and use the single month identifier and the single date number in the selection row after scrolling the month identifiers and the date numbers, respectively, as date input for a function or application on the multifunction device. 43. A portable multifunction device with a touch screen display, comprising:
means for displaying a month column comprising a sequence of month identifiers; means for displaying a date column comprising a sequence of date numbers; means for displaying a selection row that intersects the month column and the date column and contains a single month identifier and a single date number; means for detecting a gesture on the month column; means for scrolling the month identifiers in the month column without scrolling the date numbers in the date column, in response to detecting the gesture on the month column; means for detecting a gesture on the date column; means for scrolling the date numbers in the date column without scrolling the month identifiers in the month column, in response to detecting the gesture on the date column; and means for using the single month identifier and the single date number in the selection row after scrolling the month identifiers and the date numbers, respectively, as date input for a function or application on the multifunction device. 44. A computer-implemented method, comprising:
at a portable multifunction device with a touch screen display, displaying a plurality of columns, each comprising a sequence of time related values, wherein the plurality of columns includes at least three distinct columns; displaying a selection row that intersects each of the columns, the row containing a single value from each of the columns, the values in the row representing a multi-component time value; detecting a gesture on a respective column; in response to detecting the gesture on the respective column, scrolling the values in the respective column without scrolling the values in the other columns so as to change the single value in the respective column that is displayed in the selection row; repeating the detecting and scrolling with respect to another respective column; and using the multi-component time value as a time input for a function or application on the multifunction device. 45. The computer-implemented method of claim 44, wherein the plurality of columns includes at least three of: a month column, a date column, a combined month and date column, an hour column, a minute column, a seconds column, and an AM/PM column. 46. A graphical user interface on a portable multifunction device with a touch screen display, comprising:
a plurality of columns, each comprising a sequence of time related values, wherein the plurality of columns includes at least three distinct columns; and a selection row that intersects each of the columns, the row containing a single value from each of the columns, the values in the row representing a multi-component time value; wherein: in response to detecting a gesture on the respective column, the values in the respective column are scrolled without scrolling the values in the other columns so as to change the single value in the respective column that is displayed in the selection row; the detecting and scrolling are repeated with respect to another respective column; and the multi-component time value is used as a time input for a function or application on the multifunction device. 47. A portable multifunction device, comprising:
a touch screen display; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including:
instructions for displaying a plurality of columns, each comprising a sequence of time related values, wherein the plurality of columns includes at least three distinct columns;
instructions for displaying a selection row that intersects each of the columns, the row containing a single value from each of the columns, the values in the row representing a multi-component time value;
instructions for detecting a gesture on a respective column;
instructions for, in response to detecting the gesture on the respective column, scrolling the values in the respective column without scrolling the values in the other columns so as to change the single value in the respective column that is displayed in the selection row;
instructions for repeating the detecting and scrolling with respect to another respective column; and
instructions for using the multi-component time value as a time input for a function or application on the multifunction device. 48. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable multifunction device with a touch screen display, cause the device to:
display a plurality of columns, each comprising a sequence of time related values, wherein the plurality of columns includes at least three distinct columns; display a selection row that intersects each of the columns, the row containing a single value from each of the columns, the values in the row representing a multi-component time value; detect a gesture on a respective column; in response to detecting the gesture on the respective column, scroll the values in the respective column without scrolling the values in the other columns so as to change the single value in the respective column that is displayed in the selection row; repeat the detecting and scrolling with respect to another respective column; and use the multi-component time value as a time input for a function or application on the multifunction device. 49. A portable multifunction device with a touch screen display, comprising:
means for displaying a plurality of columns, each comprising a sequence of time related values, wherein the plurality of columns includes at least three distinct columns; means for displaying a selection row that intersects each of the columns, the row containing a single value from each of the columns, the values in the row representing a multi-component time value; means for detecting a gesture on a respective column; means for, in response to detecting the gesture on the respective column, scrolling the values in the respective column without scrolling the values in the other columns so as to change the single value in the respective column that is displayed in the selection row; means for repeating the detecting and scrolling with respect to another respective column; and means for using the multi-component time value as a time input for a function or application on the multifunction device.
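Claims 44–49 generalize the same mechanism to any number of time-related columns whose selection-row values jointly form a multi-component time value. A minimal sketch, again with invented names (`Picker` is not from the patent):

```python
class Picker:
    """A plurality of columns plus a selection row; scrolling one column
    changes only that column's contribution to the time value."""

    def __init__(self, **columns):
        # column name -> [sequence of time-related values, current index]
        self.columns = {name: [list(vals), 0] for name, vals in columns.items()}

    def scroll(self, name, steps):
        vals, idx = self.columns[name]
        # Only the gestured column moves; every other index is untouched.
        self.columns[name][1] = (idx + steps) % len(vals)

    def selection_row(self):
        # One value per column: the multi-component time value.
        return {name: vals[idx] for name, (vals, idx) in self.columns.items()}


picker = Picker(hour=range(1, 13), minute=range(60), ampm=["AM", "PM"])
picker.scroll("hour", 8)     # scrolls hours only
picker.scroll("minute", 30)
picker.scroll("ampm", 1)
print(picker.selection_row())  # -> {'hour': 9, 'minute': 30, 'ampm': 'PM'}
```

Each `scroll` call is the analogue of one detected gesture, and `selection_row()` is the value the claims say is "used as a time input for a function or application."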
Application No. 14,818,170 (Art Unit 2696)

1. A method comprising:
at a first electronic device with a camera, one or more IR transmitters, one or more processors, and memory coupled to the one or more processors:
operating the first electronic device in a first mode, the first mode including illuminating an environment proximate the first electronic device via at least one of the one or more IR transmitters to generate an image, via the camera, of at least a portion of the environment; and
operating the first electronic device in a second mode, the second mode including communicating information to a second electronic device via at least one of the one or more IR transmitters. 2. The method of claim 1, wherein the one or more IR transmitters comprise one or more IR LEDs. 3. The method of claim 1, further comprising:
while operating the first electronic device in the second mode, receiving a signal from a third electronic device; and transmitting an IR signal corresponding to the received signal via the one or more IR transmitters. 4. The method of claim 3, wherein the received signal comprises an IR signal; and
wherein transmitting the IR signal corresponding to the received signal via the one or more IR transmitters comprises transmitting the IR signal with a signal strength greater than a signal strength of the received signal. 5. The method of claim 3, wherein the received signal comprises an IR signal and the IR signal is received via the camera. 6. The method of claim 3, wherein the received signal comprises an IR signal and the IR signal is received via an IR receiver of the first electronic device. 7. The method of claim 3, wherein the received signal comprises an RF signal and is received via an RF receiver of the first electronic device. 8. The method of claim 3, wherein the third electronic device is one of:
a remote control; and a mobile phone. 9. The method of claim 1, wherein illuminating the environment proximate the first electronic device via the at least one of the one or more IR transmitters comprises utilizing the one or more IR transmitters to provide illumination for the camera in accordance with a determination that a light level meets a predefined criterion. 10. The method of claim 9, wherein the predefined criterion comprises a low light threshold. 11. The method of claim 1, further comprising operating the first electronic device in a third mode, the third mode including utilizing the one or more IR transmitters to construct a depth map for a scene corresponding to a field of view of the camera. 12. The method of claim 1, further comprising:
at the first electronic device:
receiving an IR signal;
generating a non-IR signal corresponding to the IR signal; and
transmitting the non-IR signal to a third electronic device;
at the third electronic device:
receiving the non-IR signal;
reconstructing the IR signal based on the non-IR signal; and
transmitting the reconstructed IR signal. 13. The method of claim 1, further comprising:
at a hub device:
receiving a request to send an IR signal to the second electronic device;
determining which electronic device from a plurality of associated electronic devices is best-suited to send the IR signal to the second electronic device; and
in accordance with a determination that the first electronic device is best-suited, relaying the request to the first electronic device;
at the first electronic device:
receiving the relayed request;
generating the IR signal based on the relayed request; and
transmitting the generated IR signal to the second electronic device. 14. An electronic device, comprising:
a camera; one or more IR transmitters; one or more processors; and memory storing one or more programs to be executed by the one or more processors, the one or more programs comprising instructions for:
operating the electronic device in a first mode, the first mode including illuminating an environment proximate the electronic device via at least one of the one or more IR transmitters to generate an image, via the camera, of at least a portion of the environment; and
operating the electronic device in a second mode, the second mode including communicating information to a second electronic device via at least one of the one or more IR transmitters. 15. The device of claim 14, the one or more programs further comprising instructions for:
while operating the electronic device in the second mode, receiving a signal from a third electronic device; and transmitting an IR signal corresponding to the received signal via the one or more IR transmitters. 16. The device of claim 14, the one or more programs further comprising instructions for:
receiving an IR signal; generating a non-IR signal corresponding to the received IR signal; and transmitting the non-IR signal to a third electronic device. 17. The device of claim 14, the one or more programs further comprising instructions for:
receiving a request to send an IR signal to the second electronic device; determining which electronic device from a plurality of associated electronic devices is best-suited to send the IR signal to the second electronic device; and in accordance with a determination that a third electronic device is best-suited, relaying the request to the third electronic device. 18. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which, when executed by an electronic device with a camera, one or more IR transmitters, and one or more processors, cause the device to perform operations comprising:
operating the electronic device in a first mode, the first mode including illuminating an environment proximate the first electronic device via at least one of the one or more IR transmitters to generate an image, via the camera, of at least a portion of the environment; operating the electronic device in a second mode, the second mode including communicating information to a second electronic device via at least one of the one or more IR transmitters. 19. The storage medium of claim 18, the one or more programs further comprising instructions for:
while operating the electronic device in the second mode, receiving a signal from a third electronic device; and transmitting an IR signal corresponding to the received signal via the one or more IR transmitters. 20. The storage medium of claim 18, the one or more programs further comprising instructions for:
receiving an IR signal; generating a non-IR signal corresponding to the received IR signal; and transmitting the non-IR signal to a third electronic device. | The various embodiments described herein include methods, devices, and systems for repurposing IR transmitters. In one aspect, a method is performed at a first electronic device with a camera, one or more IR transmitters, one or more processors, and memory coupled to the one or more processors. The method includes operating the first electronic device in a first mode, the first mode including illuminating an environment proximate the first electronic device via at least one of the one or more IR transmitters to generate an image, via the camera, of at least a portion of the environment. The method further includes operating the first electronic device in a second mode, the second mode including communicating information to a second electronic device via at least one of the one or more IR transmitters.1. A method comprising:
at a first electronic device with a camera, one or more IR transmitters, one or more processors, and memory coupled to the one or more processors:
operating the first electronic device in a first mode, the first mode including illuminating an environment proximate the first electronic device via at least one of the one or more IR transmitters to generate an image, via the camera, of at least a portion of the environment;
operating the first electronic device in a second mode, the second mode including communicating information to a second electronic device via at least one of the one or more IR transmitters. 2. The method of claim 1, wherein the one or more IR transmitters comprise one or more IR LEDs. 3. The method of claim 1, further comprising:
while operating the first electronic device in the second mode, receiving a signal from a third electronic device; and transmitting an IR signal corresponding to the received signal via the one or more IR transmitters. 4. The method of claim 3, wherein the received signal comprises an IR signal; and
wherein transmitting the IR signal corresponding to the received signal via the one or more IR transmitters comprises transmitting the IR signal with a signal strength greater than a signal strength of the received signal. 5. The method of claim 3, wherein the received signal comprises an IR signal and the IR signal is received via the camera. 6. The method of claim 3, wherein the received signal comprises an IR signal and the IR signal is received via an IR receiver of the first electronic device. 7. The method of claim 3, wherein the received signal comprises an RF signal and is received via an RF receiver of the first electronic device. 8. The method of claim 3, wherein the third electronic device is one of:
a remote control; and a mobile phone. 9. The method of claim 1, wherein illuminating the environment proximate the first electronic device via the at least one of the one or more IR transmitters comprises utilizing the one or more IR transmitters to provide illumination for the camera in accordance with a determination that a light level meets a predefined criterion. 10. The method of claim 9, wherein the predefined criterion comprises a low light threshold. 11. The method of claim 1, further comprising operating the first electronic device in a third mode, the third mode including utilizing the one or more IR transmitters to construct a depth map for a scene corresponding to a field of view of the camera. 12. The method of claim 1, further comprising:
at the first electronic device:
receiving an IR signal;
generating a non-IR signal corresponding to the IR signal; and
transmitting the non-IR signal to a third electronic device;
at the third electronic device:
receiving the non-IR signal;
reconstructing the IR signal based on the non-IR signal; and
transmitting the reconstructed IR signal. 13. The method of claim 1, further comprising:
at a hub device:
receiving a request to send an IR signal to the second electronic device;
determining which electronic device from a plurality of associated electronic devices is best-suited to send the IR signal to the second electronic device; and
in accordance with a determination that the first electronic device is best-suited, relaying the request to the first electronic device;
at the first electronic device:
receiving the relayed request;
generating the IR signal based on the relayed request; and
transmitting the generated IR signal to the second electronic device. 14. An electronic device, comprising:
a camera; one or more IR transmitters; one or more processors; and memory storing one or more programs to be executed by the one or more processors, the one or more programs comprising instructions for:
operating the electronic device in a first mode, the first mode including illuminating an environment proximate the first electronic device via at least one of the one or more IR transmitters to generate an image, via the camera, of at least a portion of the environment;
operating the electronic device in a second mode, the second mode including communicating information to a second electronic device via at least one of the one or more IR transmitters. 15. The device of claim 14, the one or more programs further comprising instructions for:
while operating the electronic device in the second mode, receiving a signal from a third electronic device; and transmitting an IR signal corresponding to the received signal via the one or more IR transmitters. 16. The device of claim 14, the one or more programs further comprising instructions for:
receiving an IR signal; generating a non-IR signal corresponding to the received IR signal; and transmitting the non-IR signal to a third electronic device. 17. The device of claim 14, the one or more programs further comprising instructions for:
receiving a request to send an IR signal to the second electronic device; determining which electronic device from a plurality of associated electronic devices is best-suited to send the IR signal to the second electronic device; and in accordance with a determination that a third electronic device is best-suited, relaying the request to the third electronic device. 18. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which, when executed by an electronic device with a camera, one or more IR transmitters, and one or more processors, cause the device to perform operations comprising:
operating the electronic device in a first mode, the first mode including illuminating an environment proximate the first electronic device via at least one of the one or more IR transmitters to generate an image, via the camera, of at least a portion of the environment; operating the electronic device in a second mode, the second mode including communicating information to a second electronic device via at least one of the one or more IR transmitters. 19. The storage medium of claim 18, the one or more programs further comprising instructions for:
while operating the electronic device in the second mode, receiving a signal from a third electronic device; and transmitting an IR signal corresponding to the received signal via the one or more IR transmitters. 20. The storage medium of claim 18, the one or more programs further comprising instructions for:
receiving an IR signal; generating a non-IR signal corresponding to the received IR signal; and transmitting the non-IR signal to a third electronic device. | 2,600 |
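The relay flow in claims 12 and 16 above (capture an IR signal, convert it to a non-IR signal, send it to a third device, reconstruct and retransmit) can be sketched in Python. This is a hypothetical illustration, not the patented implementation: the payload format, carrier frequency, and function names are assumptions.

```python
# Hypothetical sketch of the IR-relay flow: a remote-control burst is
# captured as mark/space durations (microseconds), packed into a non-IR
# payload (e.g., JSON carried over Wi-Fi), then reconstructed by a third
# device for retransmission. 38 kHz is an assumed consumer-IR carrier.
import json

def ir_to_payload(durations_us):
    """Encode captured IR mark/space timings as a non-IR message."""
    return json.dumps({"carrier_hz": 38000, "timings_us": durations_us})

def payload_to_ir(payload):
    """Reconstruct the IR timing sequence from the non-IR message."""
    msg = json.loads(payload)
    return msg["carrier_hz"], msg["timings_us"]

# Round trip: a NEC-style leader pulse followed by two bits.
burst = [9000, 4500, 560, 560, 560, 1690]
carrier, timings = payload_to_ir(ir_to_payload(burst))
assert timings == burst and carrier == 38000
```

The round trip preserves the timing list exactly, which is the property the reconstruction step in claim 12 depends on.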
9,727 | 9,727 | 13,721,286 | 2,698 | A method for enhancing an edge transition in a video signal comprising the steps of receiving a video signal including an edge transition, generating a correction signal for the edge transition, applying the correction signal to the video signal to produce a corrected signal and restricting the amplitude of the corrected signal to extend between extended maximum and minimum amplitude limits in dependence on the measured maximum and minimum amplitudes of a predefined pattern of pixels adjacent to the edge transition. | 1. A method for enhancing an edge transition in a video signal comprising the steps of:
receiving a video signal including an edge transition; generating a correction signal for the edge transition; applying the correction signal to the video signal to produce a corrected signal; measuring maximum and minimum amplitudes of a predefined pattern of pixels adjacent to the edge transition; and restricting the amplitude of the corrected signal to extend between extended maximum and minimum amplitude limits in dependence on the measured maximum and minimum amplitudes of the predefined pattern of pixels. 2. A method for enhancing an edge transition in a video signal according to claim 1 wherein the calculated extended limits are proportional to the difference between the measured maximum and minimum amplitudes of the predetermined pattern of pixels adjacent to the edge transition. 3. A method for enhancing an edge transition in a video signal according to claim 1 comprising the further step of compressing the extended maximum and/or minimum amplitude limits in dependence on the position of the edge transition within a permitted signal range. 4. A method for enhancing an edge transition in a video signal according to claim 3 wherein the extended maximum and minimum amplitude limits are compressed using a ramp function. 5. A method for enhancing an edge transition in a video signal according to claim 3 wherein the extended maximum and minimum amplitude limits are compressed using a spline function. 6. A method for enhancing an edge transition in a video signal according to claim 1 wherein the correction signal is a second differential of the original signal. 7. A method for enhancing an edge transition in a video signal according to claim 1 wherein the predetermined pattern of pixels is a symmetric measurement window about the edge transition. 8. An apparatus for enhancing an edge transition in a video signal comprising:
means for receiving a video signal including an edge transition; means for generating a correction signal for the edge transition; means for applying the correction signal to the video signal to produce a corrected signal; means for measuring the maximum and minimum amplitude of pixels in a predetermined pattern adjacent to the edge transition; means for calculating extended maximum and minimum amplitude limits in dependence on the measured maximum and minimum amplitudes; and, means for restricting the amplitude of the corrected signal within the extended maximum and minimum amplitude limits. 9. An apparatus for enhancing an edge transition in a video signal according to claim 8 wherein the calculated extended limits are proportional to the difference between the measured maximum and minimum amplitudes of the predetermined pattern of pixels adjacent to the edge transition. 10. An apparatus for enhancing an edge transition in a video signal according to claim 8 further comprising a means for compressing the extended maximum and/or minimum amplitude limits in dependence on the position of the edge feature within a permitted signal range. 11. An apparatus for enhancing an edge transition in a video signal according to claim 10 wherein the extended maximum and minimum amplitude limits are compressed using a ramp function. 12. An apparatus for enhancing an edge transition in a video signal according to claim 10 wherein the extended maximum and minimum amplitude limits are compressed using a spline function. 13. An apparatus for enhancing an edge transition in a video signal according to claim 8 wherein the correction signal is a second differential of the original signal. 14. An apparatus for enhancing an edge transition in a video signal according to claim 8 wherein the predetermined pattern of pixels is a symmetric measurement window about the edge transition. 15. 
A method for enhancing an edge transition in a video signal according to claim 1, additionally comprising the step of storing a plurality of scan lines of pixel data in a buffer. 16. An apparatus for enhancing an edge transition in a video signal according to claim 8, additionally comprising a buffer for storing a plurality of scan lines of pixel data. | A method for enhancing an edge transition in a video signal comprising the steps of receiving a video signal including an edge transition, generating a correction signal for the edge transition, applying the correction signal to the video signal to produce a corrected signal and restricting the amplitude of the corrected signal to extend between extended maximum and minimum amplitude limits in dependence on the measured maximum and minimum amplitudes of a predefined pattern of pixels adjacent to the edge transition.1. A method for enhancing an edge transition in a video signal comprising the steps of:
receiving a video signal including an edge transition; generating a correction signal for the edge transition; applying the correction signal to the video signal to produce a corrected signal; measuring maximum and minimum amplitudes of a predefined pattern of pixels adjacent to the edge transition; and restricting the amplitude of the corrected signal to extend between extended maximum and minimum amplitude limits in dependence on the measured maximum and minimum amplitudes of the predefined pattern of pixels. 2. A method for enhancing an edge transition in a video signal according to claim 1 wherein the calculated extended limits are proportional to the difference between the measured maximum and minimum amplitudes of the predetermined pattern of pixels adjacent to the edge transition. 3. A method for enhancing an edge transition in a video signal according to claim 1 comprising the further step of compressing the extended maximum and/or minimum amplitude limits in dependence on the position of the edge transition within a permitted signal range. 4. A method for enhancing an edge transition in a video signal according to claim 3 wherein the extended maximum and minimum amplitude limits are compressed using a ramp function. 5. A method for enhancing an edge transition in a video signal according to claim 3 wherein the extended maximum and minimum amplitude limits are compressed using a spline function. 6. A method for enhancing an edge transition in a video signal according to claim 1 wherein the correction signal is a second differential of the original signal. 7. A method for enhancing an edge transition in a video signal according to claim 1 wherein the predetermined pattern of pixels is a symmetric measurement window about the edge transition. 8. An apparatus for enhancing an edge transition in a video signal comprising:
means for receiving a video signal including an edge transition; means for generating a correction signal for the edge transition; means for applying the correction signal to the video signal to produce a corrected signal; means for measuring the maximum and minimum amplitude of pixels in a predetermined pattern adjacent to the edge transition; means for calculating extended maximum and minimum amplitude limits in dependence on the measured maximum and minimum amplitudes; and, means for restricting the amplitude of the corrected signal within the extended maximum and minimum amplitude limits. 9. An apparatus for enhancing an edge transition in a video signal according to claim 8 wherein the calculated extended limits are proportional to the difference between the measured maximum and minimum amplitudes of the predetermined pattern of pixels adjacent to the edge transition. 10. An apparatus for enhancing an edge transition in a video signal according to claim 8 further comprising a means for compressing the extended maximum and/or minimum amplitude limits in dependence on the position of the edge feature within a permitted signal range. 11. An apparatus for enhancing an edge transition in a video signal according to claim 10 wherein the extended maximum and minimum amplitude limits are compressed using a ramp function. 12. An apparatus for enhancing an edge transition in a video signal according to claim 10 wherein the extended maximum and minimum amplitude limits are compressed using a spline function. 13. An apparatus for enhancing an edge transition in a video signal according to claim 8 wherein the correction signal is a second differential of the original signal. 14. An apparatus for enhancing an edge transition in a video signal according to claim 8 wherein the predetermined pattern of pixels is a symmetric measurement window about the edge transition. 15. 
A method for enhancing an edge transition in a video signal according to claim 1, additionally comprising the step of storing a plurality of scan lines of pixel data in a buffer. 16. An apparatus for enhancing an edge transition in a video signal according to claim 8, additionally comprising a buffer for storing a plurality of scan lines of pixel data. | 2,600 |
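The edge-enhancement claims above describe a concrete pipeline: a second-differential correction signal (claim 6), a symmetric measurement window (claim 7), and clamping between extended limits proportional to the measured range (claim 2). A minimal 1-D sketch, with gain, window size, and overshoot fraction chosen as illustrative assumptions:

```python
# Illustrative sketch (not the patented implementation): sharpen a 1-D
# scan line with a second-difference correction, then clamp the result
# between limits extended beyond the local min/max of a symmetric
# measurement window around each pixel.
def enhance_line(pixels, gain=0.5, window=2, overshoot=0.25):
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        # correction signal: negated second differential sharpens edges
        correction = -gain * (pixels[i - 1] - 2 * pixels[i] + pixels[i + 1])
        corrected = pixels[i] + correction
        # measure min/max over the symmetric window about the pixel
        lo_idx, hi_idx = max(0, i - window), min(len(pixels), i + window + 1)
        lo, hi = min(pixels[lo_idx:hi_idx]), max(pixels[lo_idx:hi_idx])
        # extended limits proportional to the measured local range
        ext = overshoot * (hi - lo)
        out[i] = min(max(corrected, lo - ext), hi + ext)
    return out

edge = [0, 0, 0, 100, 100, 100]
enhanced = enhance_line(edge)
# over/undershoot stays within 25% of the local 0..100 range
assert all(-25 <= v <= 125 for v in enhanced)
```

The clamp is what distinguishes this from plain unsharp masking: the controlled overshoot gives a crisper transition without unbounded ringing.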
9,728 | 9,728 | 14,807,321 | 2,663 | A wearable apparatus and method are provided for selectively disregarding triggers originating from persons other than a user of the wearable apparatus. The wearable apparatus comprises a wearable image sensor configured to capture image data from an environment of the user of the wearable apparatus. The wearable apparatus also includes at least one processing device programmed to receive the captured image data and identify in the image data a trigger. The trigger is associated with at least one action to be performed by the wearable apparatus. The processing device is also programmed to determine, based on the image data, whether the trigger identified in the image data is associated with a person other than the user of the wearable apparatus, and forgo performance of the at least one action if the trigger identified in the image data is determined to be associated with a person other than the user. | 1. A wearable apparatus for selectively disregarding triggers originating from persons other than a user of the wearable apparatus, the wearable apparatus comprising:
a wearable image sensor configured to capture image data from an environment of the user of the wearable apparatus; and at least one processing device programmed to:
receive the captured image data;
identify in the image data a trigger, wherein the trigger is associated with at least one action to be performed by the wearable apparatus;
determine, based on the image data, whether the trigger identified in the image data is associated with a person other than the user of the wearable apparatus; and
forgo performance of the at least one action if the trigger identified in the image data is determined to be associated with a person other than the user of the wearable apparatus. 2. The wearable apparatus of claim 1, wherein the trigger is a hand-related trigger identified by at least a portion of a hand, and the at least one processing device is further programmed to determine, as part of determining whether the trigger identified in the image data is associated with a person other than the user, whether the portion of the hand belongs to a person other than the user of the wearable apparatus. 3. The wearable apparatus of claim 2, wherein determining that the portion of the hand belongs to the person other than the user includes determining whether a size of the hand portion in a field of view of the image sensor meets or exceeds a threshold. 4. The wearable apparatus of claim 2, wherein determining that the portion of the hand belongs to the person other than the user includes determining whether the hand is associated with a body in a field of view of the image sensor. 5. The wearable apparatus of claim 2, wherein determining that the portion of the hand belongs to the person other than the user includes comparing the portion of the hand with stored images of a hand of the user of the wearable apparatus. 6. The wearable apparatus of claim 2, wherein determining that the portion of the hand belongs to the person other than the user includes determining an orientation of the portion of the hand. 7. The wearable apparatus of claim 1, wherein the trigger is a hand-related trigger identified by at least a portion of a hand moving in an erratic motion. 8. The wearable apparatus of claim 1, wherein the trigger is a hand-related trigger identified by at least a portion of a hand configured with an index finger pointing toward an object. 9. 
The wearable apparatus of claim 1, wherein determining whether the trigger identified in the image data is associated with a person other than the user of the wearable apparatus is based on a confidence score determined from one or more analyses of the image data including the trigger. 10. The wearable apparatus of claim 1, wherein determining whether the trigger identified in the image data is associated with a person other than the user of the wearable apparatus includes determining whether one or more persons are identified in a field of view of the image sensor. 11. The wearable apparatus of claim 1, wherein the at least one action to be performed by the wearable apparatus includes at least one of: announcing an identity of an inanimate object, announcing an identity of an individual, scene identification, summing money, monitoring a status of a traffic light, saving an individual's name, audibly reading text, audibly reading a text summary, monitoring an object expected to change, identifying a bus number, identifying currency, identifying a credit card, and identifying a pill. 12. The wearable apparatus of claim 1, wherein the wearable image sensor is configured to capture real-time image data and the at least one action associated with the trigger is performed in real-time. 13. A wearable apparatus for selectively disregarding hand-related triggers originating from persons other than a user of the wearable apparatus, the wearable apparatus comprising:
a wearable image sensor configured to capture image data from an environment of the user of the wearable apparatus; and at least one processing device programmed to:
identify in the image data a hand-related trigger, wherein the hand-related trigger is associated with at least one action to be performed by the wearable apparatus;
determine, based on the image data, whether the hand-related trigger identified in the image data is associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus; and
forgo performance of the at least one action if the hand-related trigger identified in the image data is determined to be associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus. 14. The wearable apparatus of claim 13, wherein determining whether the hand-related trigger identified in the image data is associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus includes determining whether a size of the hand portion in a field of view of the image sensor meets or exceeds a threshold. 15. The wearable apparatus of claim 13, wherein determining whether the hand-related trigger identified in the image data is associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus includes determining whether the hand is associated with a body in a field of view of the image sensor. 16. The wearable apparatus of claim 13, wherein determining whether the hand-related trigger identified in the image data is associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus includes comparing the portion of the hand with stored images of a hand of the user of the wearable apparatus. 17. The wearable apparatus of claim 13, wherein determining whether the hand-related trigger identified in the image data is associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus includes determining an orientation of the portion of the hand. 18. The wearable apparatus of claim 13, wherein the hand-related trigger is identified by at least a portion of a hand moving in an erratic motion. 19. The wearable apparatus of claim 13, wherein the hand-related trigger is identified by at least a portion of a hand configured with an index finger pointing toward an object. 20. 
The wearable apparatus of claim 13, wherein determining whether the hand-related trigger identified in the image data is associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus is based on a confidence score determined from one or more analyses of the image data including the hand-related trigger. 21. A method for selectively disregarding triggers originating from persons other than a user of a wearable apparatus, the method comprising:
capturing, via a wearable image sensor of the wearable apparatus, image data from an environment of the user of the wearable apparatus; identifying in the image data a trigger, wherein the trigger is associated with at least one action to be performed by the wearable apparatus; determining, based on the image data, whether the trigger identified in the image data is associated with a person other than the user of the wearable apparatus; and forgoing performance of the at least one action if the trigger identified in the image data is determined to be associated with a person other than the user of the wearable apparatus. 22. A software product stored on a non-transitory computer readable medium and comprising data and computer implementable instructions for carrying out the method of claim 21. | A wearable apparatus and method are provided for selectively disregarding triggers originating from persons other than a user of the wearable apparatus. The wearable apparatus comprises a wearable image sensor configured to capture image data from an environment of the user of the wearable apparatus. The wearable apparatus also includes at least one processing device programmed to receive the captured image data and identify in the image data a trigger. The trigger is associated with at least one action to be performed by the wearable apparatus. The processing device is also programmed to determine, based on the image data, whether the trigger identified in the image data is associated with a person other than the user of the wearable apparatus, and forgo performance of the at least one action if the trigger identified in the image data is determined to be associated with a person other than the user.1. A wearable apparatus for selectively disregarding triggers originating from persons other than a user of the wearable apparatus, the wearable apparatus comprising:
a wearable image sensor configured to capture image data from an environment of the user of the wearable apparatus; and at least one processing device programmed to:
receive the captured image data;
identify in the image data a trigger, wherein the trigger is associated with at least one action to be performed by the wearable apparatus;
determine, based on the image data, whether the trigger identified in the image data is associated with a person other than the user of the wearable apparatus; and
forgo performance of the at least one action if the trigger identified in the image data is determined to be associated with a person other than the user of the wearable apparatus. 2. The wearable apparatus of claim 1, wherein the trigger is a hand-related trigger identified by at least a portion of a hand, and the at least one processing device is further programmed to determine, as part of determining whether the trigger identified in the image data is associated with a person other than the user, whether the portion of the hand belongs to a person other than the user of the wearable apparatus. 3. The wearable apparatus of claim 2, wherein determining that the portion of the hand belongs to the person other than the user includes determining whether a size of the hand portion in a field of view of the image sensor meets or exceeds a threshold. 4. The wearable apparatus of claim 2, wherein determining that the portion of the hand belongs to the person other than the user includes determining whether the hand is associated with a body in a field of view of the image sensor. 5. The wearable apparatus of claim 2, wherein determining that the portion of the hand belongs to the person other than the user includes comparing the portion of the hand with stored images of a hand of the user of the wearable apparatus. 6. The wearable apparatus of claim 2, wherein determining that the portion of the hand belongs to the person other than the user includes determining an orientation of the portion of the hand. 7. The wearable apparatus of claim 1, wherein the trigger is a hand-related trigger identified by at least a portion of a hand moving in an erratic motion. 8. The wearable apparatus of claim 1, wherein the trigger is a hand-related trigger identified by at least a portion of a hand configured with an index finger pointing toward an object. 9. 
The wearable apparatus of claim 1, wherein determining whether the trigger identified in the image data is associated with a person other than the user of the wearable apparatus is based on a confidence score determined from one or more analyses of the image data including the trigger. 10. The wearable apparatus of claim 1, wherein determining whether the trigger identified in the image data is associated with a person other than the user of the wearable apparatus includes determining whether one or more persons are identified in a field of view of the image sensor. 11. The wearable apparatus of claim 1, wherein the at least one action to be performed by the wearable apparatus includes at least one of: announcing an identity of an inanimate object, announcing an identity of an individual, scene identification, summing money, monitoring a status of a traffic light, saving an individual's name, audibly reading text, audibly reading a text summary, monitoring an object expected to change, identifying a bus number, identifying currency, identifying a credit card, and identifying a pill. 12. The wearable apparatus of claim 1, wherein the wearable image sensor is configured to capture real-time image data and the at least one action associated with the trigger is performed in real-time. 13. A wearable apparatus for selectively disregarding hand-related triggers originating from persons other than a user of the wearable apparatus, the wearable apparatus comprising:
a wearable image sensor configured to capture image data from an environment of the user of the wearable apparatus; and at least one processing device programmed to:
identify in the image data a hand-related trigger, wherein the hand-related trigger is associated with at least one action to be performed by the wearable apparatus;
determine, based on the image data, whether the hand-related trigger identified in the image data is associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus; and
forgo performance of the at least one action if the hand-related trigger identified in the image data is determined to be associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus. 14. The wearable apparatus of claim 13, wherein determining whether the hand-related trigger identified in the image data is associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus includes determining whether a size of the hand portion in a field of view of the image sensor meets or exceeds a threshold. 15. The wearable apparatus of claim 13, wherein determining whether the hand-related trigger identified in the image data is associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus includes determining whether the hand is associated with a body in a field of view of the image sensor. 16. The wearable apparatus of claim 13, wherein determining whether the hand-related trigger identified in the image data is associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus includes comparing the portion of the hand with stored images of a hand of the user of the wearable apparatus. 17. The wearable apparatus of claim 13, wherein determining whether the hand-related trigger identified in the image data is associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus includes determining an orientation of the portion of the hand. 18. The wearable apparatus of claim 13, wherein the hand-related trigger is identified by at least a portion of a hand moving in an erratic motion. 19. The wearable apparatus of claim 13, wherein the hand-related trigger is identified by at least a portion of a hand configured with an index finger pointing toward an object. 20. 
The wearable apparatus of claim 13, wherein determining whether the hand-related trigger identified in the image data is associated with at least a portion of a hand belonging to a person other than the user of the wearable apparatus is based on a confidence score determined from one or more analyses of the image data including the hand-related trigger. 21. A method for selectively disregarding triggers originating from persons other than a user of a wearable apparatus, the method comprising:
capturing, via a wearable image sensor of the wearable apparatus, image data from an environment of the user of the wearable apparatus; identifying in the image data a trigger, wherein the trigger is associated with at least one action to be performed by the wearable apparatus; determining, based on the image data, whether the trigger identified in the image data is associated with a person other than the user of the wearable apparatus; and forgoing performance of the at least one action if the trigger identified in the image data is determined to be associated with a person other than the user of the wearable apparatus. 22. A software product stored on a non-transitory computer readable medium and comprising data and computer implementable instructions for carrying out the method of claim 21. | 2,600 |
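The claims above describe deciding whether a detected hand trigger belongs to the wearer before acting on it. A minimal sketch of that decision, assuming hypothetical names, weights, and thresholds (the patent specifies the cues — hand-portion size in the field of view, orientation, and comparison with stored images of the user's hand, combined into a confidence score — but not any particular formula):

```python
# Illustrative sketch, not the patent's implementation. Combines three of the
# claimed cues (claims 3, 5, 6, 9) into a hypothetical confidence score that
# the detected hand belongs to someone other than the wearer.

def belongs_to_other_person(hand_area_ratio, points_toward_user,
                            matches_stored_user_hand, size_threshold=0.25):
    """Return a score in [0, 1] that the hand portion is not the wearer's."""
    score = 0.0
    # Claim 3: a hand filling an unusually large share of the field of view
    # is more likely another person's hand reaching in close to the camera.
    if hand_area_ratio >= size_threshold:
        score += 0.4
    # Claim 6: a hand oriented toward the wearer typically approaches from
    # outside rather than from the wearer's own arm.
    if points_toward_user:
        score += 0.3
    # Claim 5: no match against stored images of the user's own hand.
    if not matches_stored_user_hand:
        score += 0.3
    return score

def should_forgo_action(score, threshold=0.5):
    # Claim 1: forgo the triggered action when the trigger is attributed
    # to a person other than the user.
    return score >= threshold
```

For example, a large, user-facing, unrecognized hand yields a high score and the action is skipped, while a small hand matching the stored user images is acted on.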
9,729 | 9,729 | 14,243,591 | 2,659 | A system and method can detect and/or prevent profane/objectionable content in forums/communities with community based or user generated content. The system generates and provides a disallowed variants dictionary, which can be constructed based on a misuse table, wherein the disallowed variants dictionary contains a plurality of variants of one or more disallowed words in a community based or user generated content. Furthermore, the system checks each word in an incoming message against the disallowed variants dictionary, and determines that one or more words in the incoming message are disallowed when there is a hit. | 1. A method for detecting profane/objectionable content, comprising:
generating a disallowed variants dictionary, wherein the disallowed variants dictionary contains a plurality of variants of one or more disallowed words in a community based or user generated content; checking one or more words in an incoming message against the disallowed variants dictionary; and determining that said one or more words in the incoming message are disallowed when there is a hit. 2. The method according to claim 1, further comprising:
performing at least one of the following steps:
removing one or more stop words in the incoming message,
removing any special symbols in the incoming message, and
catching one or more words with capitalization at odd locations. 3. The method according to claim 1, further comprising:
calculating an amount of profanity in the incoming message. 4. The method according to claim 1, further comprising:
checking each word in the incoming message against a disallowed words dictionary, and determining said word is disallowed if there is a hit. 5. The method according to claim 1, further comprising:
checking each word in the incoming message against an allowed words dictionary, and determining said word is allowed if there is a hit. 6. The method according to claim 1, further comprising:
performing a rolling of repeated character for said one or more words. 7. The method according to claim 1, further comprising:
using a look-left algorithm to construct the disallowed variants dictionary based on a misuse table and a disallowed words dictionary. 8. The method according to claim 7, further comprising:
containing in the disallowed variants dictionary one or more of morphologically and phonetically similar variants of said one or more disallowed words, and any misspellings and/or their variations of said one or more disallowed words. 9. The method according to claim 7, further comprising:
using a consonant chart to obtain one or more phonetically similar variants of said one or more disallowed words. 10. The method according to claim 7, further comprising:
containing in the disallowed variants dictionary one or more variants in different languages. 11. A system for detecting profane/objectionable content, comprising:
one or more microprocessors, one or more servers running on the one or more microprocessors, wherein the one or more servers operate to
generate a disallowed variants dictionary, wherein the disallowed variants dictionary contains a plurality of variants of one or more disallowed words in a community based or user generated content;
check one or more words in an incoming message against the disallowed variants dictionary; and
determine that said one or more words in the incoming message are disallowed when there is a hit. 12. The system according to claim 11, wherein:
the one or more servers operate to perform at least one of the following steps:
removing one or more stop words in the incoming message,
removing any special symbols in the incoming message, and
catching one or more words with capitalization at odd locations. 13. The system according to claim 11, further comprising:
the one or more servers operate to calculate an amount of profanity in the incoming message. 14. The system according to claim 11, further comprising:
the one or more servers operate to
check each word in the incoming message against a disallowed words dictionary, and
determine said word is disallowed if there is a hit. 15. The system according to claim 11, further comprising:
the one or more servers operate to
check each word in the incoming message against an allowed words dictionary, and
determine said word is allowed if there is a hit. 16. The system according to claim 11, further comprising:
the one or more servers operate to perform a rolling of repeated character for said one or more words. 17. The system according to claim 11, further comprising:
the one or more servers operate to use a look-left algorithm to construct the disallowed variants dictionary based on a misuse table and a disallowed words dictionary. 18. The system according to claim 17, further comprising:
the disallowed variants dictionary contains one or more of morphologically and phonetically similar variants of said one or more disallowed words, and any misspellings and/or their variations of said one or more disallowed words. 19. The system according to claim 17, further comprising:
the one or more servers operate to use a consonant chart to obtain one or more phonetically similar variants of said one or more disallowed words, and the disallowed variants dictionary contains one or more variants in different languages. 20. A non-transitory machine readable storage medium having instructions stored thereon that when executed cause a system to perform the steps comprising:
generating a disallowed variants dictionary, wherein the disallowed variants dictionary contains a plurality of variants of one or more disallowed words in a community based or user generated content; checking one or more words in an incoming message against the disallowed variants dictionary; and determining that said one or more words in the incoming message are disallowed when there is a hit. | 2,600 |
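The screening flow claimed above (strip special symbols, roll repeated characters, check against a disallowed set built from a misuse table) can be sketched as follows. Note two assumptions: the misuse-table entries and disallowed word are placeholder examples, and the sketch normalizes each word at check time rather than precomputing a disallowed variants dictionary as the patent does:

```python
# Illustrative sketch, not the patent's implementation.
import re

# Hypothetical stand-ins for the patent's data structures: a misuse table
# (symbol -> letter it commonly substitutes for) and a disallowed word set.
MISUSE_TABLE = {"@": "a", "4": "a", "3": "e", "$": "s", "0": "o", "1": "i"}
DISALLOWED_WORDS = {"darn"}

def normalize(word):
    word = word.lower()
    # Undo common symbol substitutions via the misuse table.
    word = "".join(MISUSE_TABLE.get(ch, ch) for ch in word)
    # Remove remaining special symbols (claim 2).
    word = re.sub(r"[^a-z]", "", word)
    # Roll runs of a repeated character down to one (claim 6).
    word = re.sub(r"(.)\1+", r"\1", word)
    return word

def disallowed_hits(message):
    # Claim 1: flag words whose normalized form hits the disallowed set.
    return [w for w in message.split() if normalize(w) in DISALLOWED_WORDS]
```

With these placeholder entries, `disallowed_hits("oh d@rrrn that")` flags the obfuscated word because "@" maps back to "a" and the repeated "r" is rolled.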
9,730 | 9,730 | 13,863,805 | 2,692 | An information processing device including a display; a first sensor configured to detect a first object that comes into contact with or approaches the display based on a change in a magnetic field; and a second sensor configured to detect a second object that comes into contact with or approaches the display based on a change in capacitance or resistance. | 1. An information processing device comprising:
a display; a first sensor configured to detect a first object that comes into contact with or approaches the display based on a change in a magnetic field; and a second sensor configured to detect a second object that comes into contact with or approaches the display based on a change in capacitance or resistance. 2. The information processing device of claim 1, further comprising:
circuitry configured to determine a first coordinate position of the first object based on an output of the first sensor, and determine a second coordinate position of the second object based on an output of the second sensor. 3. The information processing device of claim 2, wherein
the circuitry is further configured to execute a predetermined function based on the detected first and second coordinate positions. 4. The information processing device of claim 1, further comprising:
circuitry configured to determine that the second object corresponds to a palm of a user's hand based on an output of the second sensor, and ignore the detection when the second object is determined to correspond to the palm of the user's hand. 5. The information processing device of claim 1, wherein
the second sensor includes a first detecting area and a second detecting area. 6. The information processing device of claim 5, further comprising:
a user interface configured to receive an input indicating a user's dominant hand; and circuitry configured to set an arrangement of the first detecting area and the second detecting area based on the received input. 7. The information processing device of claim 5, wherein
the circuitry is configured to detect a gesture input by a user when an output of the second sensor indicates that the second object is detected in the first detecting area, and execute a predetermined function based on the detected gesture. 8. The information processing device of claim 7, wherein
the circuitry is configured to detect the gesture input when the output of the second sensor indicates that a plurality of the second objects are simultaneously detected in the first detecting area. 9. The information processing device of claim 7, wherein
the circuitry is configured to detect the gesture input when the output of the second sensor indicates that a single second object is detected in the first detecting area and the single second object moves within the first detecting area. 10. The information processing device of claim 5, wherein
the circuitry is configured to determine that the second object detected in the first detecting area corresponds to a palm of a user's hand based on an output from the second sensor, and ignore the detection when the second object is determined to correspond to the palm of the user's hand. 11. The information processing device of claim 10, wherein
the circuitry is configured to determine that the second object corresponds to the palm of the user's hand when the output of the second sensor indicates that a single touch input is detected in the first detecting area. 12. The information processing device of claim 4, wherein
the circuitry is configured to detect a single touch input by a user when an output of the second sensor indicates that the second object is detected in the second detecting area, and execute a predetermined function based on the detected single touch input. 13. The information processing device of claim 4, wherein
the circuitry is configured to determine that the second object detected in the second detecting area corresponds to a palm of a user's hand based on an output from the second sensor, and accept the detection when the second object is determined to correspond to the palm of the user's hand. 14. A method performed by an information processing device, the method comprising:
detecting, at a first sensor, a first object that comes into contact with or approaches a display of the information processing device based on a change in a magnetic field; and detecting, at a second sensor, a second object that comes into contact with or approaches the display based on a change in capacitance or resistance. 15. A non-transitory computer-readable medium including computer program instructions, which when executed by an information processing device, cause the information processing device to:
detect, at a first sensor, a first object that comes into contact with or approaches a display of the information processing device based on a change in a magnetic field; and detect, at a second sensor, a second object that comes into contact with or approaches the display based on a change in capacitance or resistance. | 2,600 |
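The two-sensor arrangement above (a magnetic-field sensor for a stylus, a capacitive or resistive sensor for fingers, with palm contacts ignored per claim 4) amounts to routing each contact by its source and size. A minimal sketch, assuming hypothetical event fields and a hypothetical palm-area threshold:

```python
# Illustrative sketch, not the patent's implementation.
from dataclasses import dataclass

@dataclass
class Contact:
    source: str          # "magnetic" (first sensor) or "capacitive" (second sensor)
    x: float
    y: float
    contact_area: float  # normalized share of the panel covered, 0..1

PALM_AREA_THRESHOLD = 0.08  # hypothetical: large capacitive blobs = resting palm

def route(contact):
    if contact.source == "magnetic":
        return "stylus"   # first sensor detects the pen via magnetic-field change
    if contact.contact_area >= PALM_AREA_THRESHOLD:
        return "ignored"  # claim 4: disregard contacts classified as a palm
    return "finger"       # second sensor: touch/gesture input
```

A small capacitive contact is treated as a finger, a large one is rejected as a palm, and any magnetic detection is handled as stylus input regardless of size.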
9,731 | 9,731 | 14,870,458 | 2,653 | An echo cancellation detector for controlling an acoustic echo canceller that is configured to cancel an echo of a far-end signal in a near-end signal in a telephony system, the echo cancellation detector comprising a comparison generator configured to compare the far-end signal with the near-end signal, a decision unit configured to make a determination about a first acoustic echo canceller based on that comparison and a controller configured to control an operation of a second acoustic echo canceller in dependence on the determination. | 1. An echo cancellation detector for controlling an acoustic echo canceller that is configured to cancel an echo of a far-end signal in a near-end signal in a telephony system, the echo cancellation detector comprising:
a comparison generator configured to compare the far-end signal with the near-end signal; a decision unit configured to make a determination about a first acoustic echo canceller based on a result of comparison by the comparison generator; and a controller configured to control an operation of a second acoustic echo canceller in dependence on the determination by the decision unit. 2. An echo cancellation detector as claimed in claim 1, wherein the decision unit is further configured to make a determination as to whether the first acoustic echo canceller is present in said telephony system or not. 3. An echo cancellation detector as claimed in claim 2, wherein the controller is further configured to:
control the second acoustic echo canceller to be in a state in which it is not operating in response to a determination that the first acoustic echo canceller is present; and control the second acoustic echo canceller to be in a state in which it is operating in response to a determination that the first acoustic echo canceller is not present. 4. An echo cancellation detector as claimed in claim 2, wherein the controller comprises:
a monitoring unit configured to monitor whether the first acoustic echo canceller is successfully removing far-end echo from a microphone signal in order to provide the near-end signal; the controller being further configured to control the second acoustic echo canceller to be in a state in which it is operating to remove far-end echo from the near-end signal in response to a determination that the first acoustic echo canceller is not successfully removing far-end echo from the microphone signal. 5. An echo cancellation detector as claimed in claim 1, wherein the comparison generator is further configured to compare an indication of the frequency spectrum of the far-end signal with an indication of the frequency spectrum of the near-end signal. 6. An echo cancellation detector as claimed in claim 1, wherein the comparison generator is further configured to compare a binary representation of the frequency spectrum of the far-end signal with a binary representation of the frequency spectrum of the near-end signal. 7. An echo cancellation detector as claimed in claim 6, wherein the comparison generator comprises a frequency spectra generator configured to form a binary representation of a frequency spectrum by:
representing a frequency bin in the frequency spectrum with a magnitude above a predetermined threshold as a first predetermined value in the binary representation; and representing a frequency bin with a magnitude below the predetermined threshold as a second predetermined value in the binary representation. 8. An echo cancellation detector as claimed in claim 7, wherein the frequency spectra generator is configured to form the binary representation of the frequency spectrum to represent selected frequency bins only. 9. An echo cancellation detector as claimed in claim 8, wherein the frequency spectra generator is further configured to select the frequency bins to correspond to frequencies found in human speech. 10. An echo cancellation detector as claimed in claim 7, wherein the comparison generator is further configured to compare the far-end signal with the near-end signal by counting the number of corresponding frequency bins for which the binary representations of the far-end and near-end signals either both have the first predetermined value or both have the second predetermined value. 11. An echo cancellation detector as claimed in claim 10, wherein the comparison generator is further configured to:
compare a binary representation of the near-end signal for a current frame with binary representations of the far-end signal for multiple previous frames; and add one unit to the count if a binary representation of the far-end signal for any of those previous frames comprises the first or second predetermined value for a frequency bin that corresponds to a frequency bin in which the binary representation of the near-end signal for the current frame has the same respective first or second predetermined value. 12. An echo cancellation detector as claimed in claim 11, wherein the comparison generator is further configured to average the count with one or more counts generated by comparing preceding frames of the near-end signal and the far-end signal. 13. An echo cancellation detector as claimed in claim 12, wherein the controller is further configured to:
control the second acoustic echo canceller to be in a state in which it is not operating if the averaged count is below a predetermined threshold; and control the second acoustic echo canceller to be in a state in which it is operating if the averaged count is above the predetermined threshold. 14. An echo cancellation detector as claimed in claim 1, wherein the echo cancellation detector is further configured to confirm the presence of far-end voice before comparing the far-end signal with the near-end signal. 15. A method for cancelling an echo of a far-end signal in a near-end signal in a telephony system, the method comprising:
comparing the far-end signal with the near-end signal; making a determination about a first acoustic echo canceller based on the comparison; and operating a second acoustic echo canceller in dependence on the determination. 16. A comparison generator for determining the similarity between a first signal and a second signal, the comparison generator comprising a frequency spectra generator configured to:
obtain a frequency spectrum of both signals; and for each frequency spectrum, form a binary representation of that spectrum by representing a frequency bin having a magnitude above a predetermined threshold in the frequency spectrum with a first predetermined value and a frequency bin having a magnitude below the predetermined threshold in the frequency spectrum with a second predetermined value; the comparison generator being further configured to: compare the binary representations of the first and second signals; and count the number of corresponding frequency bins for which the binary representations of the first and second signals either both have the first predetermined value or both have the second predetermined value. 17. The comparison generator as claimed in claim 16, further configured to:
compare a binary representation of the first signal for a current frame with binary representations of the second signal for multiple previous frames; and add one unit to the count if a binary representation of the second signal for any of those previous frames comprises the first or second predetermined value for a frequency bin that corresponds to a frequency bin in which the binary representation of the first signal for the current frame has the same respective first or second predetermined value. 18. The comparison generator as claimed in claim 16, further configured to average the count with one or more counts generated by comparing preceding frames of the first and second signals. 19. An echo cancellation detector comprising a comparison generator as claimed in claim 16, the echo cancellation detector further comprising a decision unit configured to make a determination about whether a first acoustic echo canceller is present in the telephony system or not, in dependence on the determined similarity between the near-end and far-end signals. 20. An echo cancellation detector as claimed in claim 19, the controller being further configured to:
cause the second acoustic echo canceller to be in a state in which it is not operating if the averaged count is below a predetermined threshold; and cause the second acoustic echo canceller to be in a state in which it is operating if the averaged count is above the predetermined threshold. | An echo cancellation detector for controlling an acoustic echo canceller that is configured to cancel an echo of a far-end signal in a near-end signal in a telephony system, the echo cancellation detector comprising a comparison generator configured to compare the far-end signal with the near-end signal, a decision unit configured to make a determination about a first acoustic echo canceller based on that comparison and a controller configured to control an operation of a second acoustic echo canceller in dependence on the determination.1. An echo cancellation detector for controlling an acoustic echo canceller that is configured to cancel an echo of a far-end signal in a near-end signal in a telephony system, the echo cancellation detector comprising:
a comparison generator configured to compare the far-end signal with the near-end signal; a decision unit configured to make a determination about a first acoustic echo canceller based on a result of comparison by the comparison generator; and a controller configured to control an operation of a second acoustic echo canceller in dependence on the determination by the decision unit. 2. An echo cancellation detector as claimed in claim 1, wherein the decision unit is further configured to make a determination as to whether the first acoustic echo canceller is present in said telephony system or not. 3. An echo cancellation detector as claimed in claim 2, wherein the controller is further configured to:
control the second acoustic echo canceller to be in a state in which it is not operating in response to a determination that the first acoustic echo canceller is present; and control the second acoustic echo canceller to be in a state in which it is operating in response to a determination that the first acoustic echo canceller is not present. 4. An echo cancellation detector as claimed in claim 2, wherein the controller comprises:
a monitoring unit configured to monitor whether the first acoustic echo canceller is successfully removing far-end echo from a microphone signal in order to provide the near-end signal; the controller being further configured to control the second acoustic echo canceller to be in a state in which it is operating to remove far-end echo from the near-end signal in response to a determination that the first acoustic echo canceller is not successfully removing far-end echo from the microphone signal. 5. An echo cancellation detector as claimed in claim 1, wherein the comparison generator is further configured to compare an indication of the frequency spectrum of the far-end signal with an indication of the frequency spectrum of the near-end signal. 6. An echo cancellation detector as claimed in claim 1, wherein the comparison generator is further configured to compare a binary representation of the frequency spectrum of the far-end signal with a binary representation of the frequency spectrum of the near-end signal. 7. An echo cancellation detector as claimed in claim 6, wherein the comparison generator comprises a frequency spectra generator configured to form a binary representation of a frequency spectrum by:
representing a frequency bin in the frequency spectrum with a magnitude above a predetermined threshold as a first predetermined value in the binary representation; and representing a frequency bin with a magnitude below the predetermined threshold as a second predetermined value in the binary representation. 8. An echo cancellation detector as claimed in claim 7, wherein the frequency spectra generator is configured to form the binary representation of the frequency spectrum to represent selected frequency bins only. 9. An echo cancellation detector as claimed in claim 8, wherein the frequency spectra generator is further configured to select the frequency bins to correspond to frequencies found in human speech. 10. An echo cancellation detector as claimed in claim 7, wherein the comparison generator is further configured to compare the far-end signal with the near-end signal by counting the number of corresponding frequency bins for which the binary representations of the far-end and near-end signals either both have the first predetermined value or both have the second predetermined value. 11. An echo cancellation detector as claimed in claim 10, wherein the comparison generator is further configured to:
compare a binary representation of the near-end signal for a current frame with binary representations of the far-end signal for multiple previous frames; and add one unit to the count if a binary representation of the far-end signal for any of those previous frames comprises the first or second predetermined value for a frequency bin that corresponds to a frequency bin in which the binary representation of the near-end signal for the current frame has the same respective first or second predetermined value. 12. An echo cancellation detector as claimed in claim 11, wherein the comparison generator is further configured to average the count with one or more counts generated by comparing preceding frames of the near-end signal and the far-end signal. 13. An echo cancellation detector as claimed in claim 12, wherein the controller is further configured to:
control the second acoustic echo canceller to be in a state in which it is not operating if the averaged count is below a predetermined threshold; and control the second acoustic echo canceller to be in a state in which it is operating if the averaged count is above the predetermined threshold. 14. An echo cancellation detector as claimed in claim 1, wherein the echo cancellation detector is further configured to confirm the presence of far-end voice before comparing the far-end signal with the near-end signal. 15. A method for cancelling an echo of a far-end signal in a near-end signal in a telephony system, the method comprising:
comparing the far-end signal with the near-end signal; making a determination about a first acoustic echo canceller based on the comparison; and operating a second acoustic echo canceller in dependence on the determination. 16. A comparison generator for determining the similarity between a first signal and a second signal, the comparison generator comprising a frequency spectra generator configured to:
obtain a frequency spectrum of both signals; and for each frequency spectrum, form a binary representation of that spectrum by representing a frequency bin having a magnitude above a predetermined threshold in the frequency spectrum as a first predetermined value and a frequency bin having a magnitude below the predetermined threshold in the frequency spectrum as a second predetermined value; the comparison generator being further configured to: compare the binary representations of the first and second signals; and count the number of corresponding frequency bins for which the binary representations of the first and second signals either both have the first predetermined value or both have the second predetermined value. 17. The comparison generator as claimed in claim 16, further configured to:
compare a binary representation of the first signal for a current frame with binary representations of the second signal for multiple previous frames; and add one unit to the count if a binary representation of the second signal for any of those previous frames comprises the first or second predetermined value for a frequency bin that corresponds to a frequency bin in which the binary representation of the first signal for the current frame has the same respective first or second predetermined value. 18. The comparison generator as claimed in claim 16, further configured to average the count with one or more counts generated by comparing preceding frames of the first and second signals. 19. An echo cancellation detector comprising a comparison generator as claimed in claim 16, the echo cancellation detector further comprising a decision unit configured to make a determination about whether a first acoustic echo canceller is present in the telephony system or not, in dependence on the determined similarity between the near-end and far-end signals. 20. An echo cancellation detector as claimed in claim 19, the controller being further configured to:
cause the second acoustic echo canceller to be in a state in which it is not operating if the averaged count is below a predetermined threshold; and cause the second acoustic echo canceller to be in a state in which it is operating if the averaged count is above the predetermined threshold. | 2,600 |
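The detection procedure spelled out in claims 7, 10, 11, 13 and 16 above is concrete enough to sketch: binarize each frame's magnitude spectrum against a threshold, count the frequency bins where the near-end representation agrees with the far-end signal in any of several previous frames, and run the second echo canceller only when the averaged count is above a decision threshold. The following Python/NumPy sketch is an illustration only, not the patented implementation; the FFT-based spectrum, the function names and the parameter values are assumptions.

```python
import numpy as np

def binary_spectrum(frame, threshold, bins=None):
    # Claims 7 and 16: a bin whose magnitude is above the threshold becomes 1,
    # a bin below it becomes 0. `bins` optionally restricts the representation
    # to selected (e.g. speech-band) bins, as in claims 8-9.
    spectrum = np.abs(np.fft.rfft(frame))
    if bins is not None:
        spectrum = spectrum[bins]
    return (spectrum > threshold).astype(np.uint8)

def similarity_count(near_bits, far_bits_history):
    # Claims 10-11: count bins where the current near-end frame matches
    # the far-end signal in any of several previous frames.
    count = 0
    for k in range(len(near_bits)):
        if any(far_bits[k] == near_bits[k] for far_bits in far_bits_history):
            count += 1
    return count

def second_aec_should_run(averaged_count, decision_threshold):
    # Claims 13 and 20: a high averaged count means the near-end signal still
    # resembles the far-end signal, i.e. no first canceller is removing the
    # echo, so the second acoustic echo canceller should operate.
    return averaged_count >= decision_threshold
```

With identical near-end and far-end frames the count equals the total number of bins, which pushes the detector toward enabling the second canceller.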
9,732 | 9,732 | 13,583,148 | 2,647 | A method for protecting data contained in a security module of a telecommunication device equipped with a near field communication router, wherein a modification of a routing table between gates of the router is dependent on a verification of an authentication code keyed in by a user. | 1. A method for protecting data contained in a security module of a telecommunication device equipped with a near field communication router, wherein a modification of a routing table between gates of said router is dependent on a verification of an authentication code keyed in by a user. 2. The method of claim 1, wherein a phase of configuration of the router by the security module comprises, after verification of the authentication code, steps of generation of a password and of transmission of this password to the security module. 3. The method of claim 2, wherein any modification of the routing table is then submitted to a verification, by the router, of said password. 4. The method of claim 2, wherein the password is generated in non-deterministic fashion, and preferably randomly. 5. The method of claim 1, wherein a reference signature is calculated on each modification of the routing table and is stored in the security module. 6. The method of claim 5, wherein the reference signature is calculated by the security module. 7. The method of claim 5, wherein the signature of the routing table is verified by the security module on each initialization of the router. 8. The method of claim 5, wherein a provision of data by said security module on a gate of said router is preceded by a comparison of a current signature of the routing table with the reference signature. 9. A security module intended for a telecommunication device equipped with a near field communication router, comprising means capable of implementing the method of claim 1. 10. 
A telecommunication device equipped with a near field communication router, comprising means for implementing the method of claim 1. | A method for protecting data contained in a security module of a telecommunication device equipped with a near field communication router, wherein a modification of a routing table between gates of the router is dependent on a verification of an authentication code keyed in by a user.1. A method for protecting data contained in a security module of a telecommunication device equipped with a near field communication router, wherein a modification of a routing table between gates of said router is dependent on a verification of an authentication code keyed in by a user. 2. The method of claim 1, wherein a phase of configuration of the router by the security module comprises, after verification of the authentication code, steps of generation of a password and of transmission of this password to the security module. 3. The method of claim 2, wherein any modification of the routing table is then submitted to a verification, by the router, of said password. 4. The method of claim 2, wherein the password is generated in non-deterministic fashion, and preferably randomly. 5. The method of claim 1, wherein a reference signature is calculated on each modification of the routing table and is stored in the security module. 6. The method of claim 5, wherein the reference signature is calculated by the security module. 7. The method of claim 5, wherein the signature of the routing table is verified by the security module on each initialization of the router. 8. The method of claim 5, wherein a provision of data by said security module on a gate of said router is preceded by a comparison of a current signature of the routing table with the reference signature. 9. A security module intended for a telecommunication device equipped with a near field communication router, comprising means capable of implementing the method of claim 1. 10. 
A telecommunication device equipped with a near field communication router, comprising means for implementing the method of claim 1. | 2,600 |
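Claims 5 through 8 of this application describe a reference signature that the security module stores on each modification of the routing table and checks again before providing data on a router gate. A brief sketch of that check follows; the table layout (source gate mapped to destination gate) and the HMAC-SHA256 over a canonical serialization are illustrative assumptions, since the application does not specify either.

```python
import hashlib
import hmac

def table_signature(routing_table, key):
    # Claims 5-6: the security module computes a reference signature over the
    # routing table. The sorted serialization and HMAC-SHA256 primitive are
    # illustrative choices, not specified by the application.
    serialized = "|".join(f"{src}->{dst}" for src, dst in sorted(routing_table.items()))
    return hmac.new(key, serialized.encode(), hashlib.sha256).hexdigest()

def verify_before_serving(routing_table, stored_signature, key):
    # Claims 7-8: compare the current table's signature with the stored
    # reference before the module provides data on a gate of the router.
    current = table_signature(routing_table, key)
    return hmac.compare_digest(current, stored_signature)
```

Any unauthorized edit to the table changes the serialization, so the comparison fails and the module can refuse to expose its data.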
9,733 | 9,733 | 15,365,774 | 2,648 | Network traffic associated with a communication request within a computing device can be identified. The device can comprise a first and a second communication stack, which can address a first and a second network interface within the computing device. The first network interface can be associated with a mobile broadband network and the second network interface can be associated with a computing network. A first and a second portion of the network traffic associated with the communication request can be programmatically determined to be conveyed to the first and second network interfaces. The first and second portions of network traffic can be conveyed simultaneously to the mobile broadband network associated with the first network interface and the computing network associated with the second network interface. | 1-25. (canceled) 26. A computer-implemented method within a computer hardware system implementing an application layer and a data link layer, comprising:
identifying, using a network fusion layer disposed between the application layer and the data link layer, a first portion of network traffic from the application layer; identifying, using the network fusion layer, a second portion of the network traffic from the application layer; routing, using the network fusion layer, the first portion to a first network interface associated with a computing network; and routing, using the network fusion layer, the second portion to a second network interface associated with a mobile phone network, wherein the identifying and the routing of the first and second portions occurs simultaneously. 27. The method of claim 26, wherein
the first network interface is dedicated to a first application in the application layer, and the second network interface is dedicated to a second application in the application layer. 28. The method of claim 26, wherein
the network fusion layer dynamically manages the network traffic based upon prioritization criteria. 29. The method of claim 26, wherein
the first and second network interfaces receive network traffic from a first application in the application layer, the first network interface is dedicated to a predefined first portion of the network traffic from the first application; the second network interface is dedicated to a predefined second portion of network traffic from the first application. 30. The method of claim 26, further comprising:
identifying a state change of one of the first and second network interfaces; and automatically deactivating, upon the state change being the one of the first and second network interfaces becoming unresponsive, the one of the first and second network interfaces. 31. The method of claim 26, further comprising:
receiving a communication request with a network access protocol destined to an interface having a different network access protocol; translating, using the network fusion layer, the communication request into the different network access protocol; and conveying the communication request over the interface via the different network access protocol. 32. The method of claim 26, wherein the network fusion layer includes:
a network interface manager configured to manage a plurality of the network interfaces associated with a plurality of network links; a data composer configured to assemble and disassemble a communication request associated with the network traffic; a session handler configured to establish a communication session between a source and at least one destination entity associated with the network traffic; a flow controller configured to moderate transmission speed of the network traffic transmitted over the plurality of network links; and a routing engine configured to convey at least a portion of the communication request to the source entity and the at least one destination entity utilizing the plurality of network links. 33. A computer hardware system implementing an application layer and a data link layer, comprising:
a hardware processor configured to initiate the following executable operations:
identifying, using a network fusion layer disposed between the application layer and the data link layer, a first portion of network traffic from the application layer;
identifying, using the network fusion layer, a second portion of the network traffic from the application layer;
routing, using the network fusion layer, the first portion to a first network interface associated with a computing network; and
routing, using the network fusion layer, the second portion to a second network interface associated with a mobile phone network, wherein
the identifying and the routing of the first and second portions occurs simultaneously. 34. The system of claim 33, wherein
the first network interface is dedicated to a first application in the application layer, and the second network interface is dedicated to a second application in the application layer. 35. The system of claim 33, wherein
the network fusion layer dynamically manages the network traffic based upon prioritization criteria. 36. The system of claim 33, wherein
the first and second network interfaces receive network traffic from a first application in the application layer, the first network interface is dedicated to a predefined first portion of the network traffic from the first application; the second network interface is dedicated to a predefined second portion of network traffic from the first application. 37. The system of claim 33, wherein the hardware processor is configured to initiate the following further executable operations:
identifying a state change of one of the first and second network interfaces; and automatically deactivating, upon the state change being the one of the first and second network interfaces becoming unresponsive, the one of the first and second network interfaces. 38. The system of claim 33, wherein the hardware processor is configured to initiate the following further executable operations:
receiving a communication request with a network access protocol destined to an interface having a different network access protocol; translating, using the network fusion layer, the communication request into the different network access protocol; and conveying the communication request over the interface via the different network access protocol. 39. The system of claim 33, wherein
the network fusion layer includes:
a network interface manager configured to manage a plurality of the network interfaces associated with a plurality of network links;
a data composer configured to assemble and disassemble a communication request associated with the network traffic;
a session handler configured to establish a communication session between a source and at least one destination entity associated with the network traffic;
a flow controller configured to moderate transmission speed of the network traffic transmitted over the plurality of network links; and
a routing engine configured to convey at least a portion of the communication request to the source entity and the at least one destination entity utilizing the plurality of network links. 40. A computer program product, comprising:
a hardware storage device having stored therein computer usable program code, the computer usable program code, which when executed by a computer hardware system implementing an application layer and a data link layer, causes the computer hardware system to perform:
identifying, using a network fusion layer disposed between the application layer and the data link layer, a first portion of network traffic from the application layer;
identifying, using the network fusion layer, a second portion of the network traffic from the application layer;
routing, using the network fusion layer, the first portion to a first network interface associated with a computing network; and
routing, using the network fusion layer, the second portion to a second network interface associated with a mobile phone network, wherein
the identifying and the routing of the first and second portions occurs simultaneously. 41. The computer program product of claim 40, wherein
the first network interface is dedicated to a first application in the application layer, and the second network interface is dedicated to a second application in the application layer. 42. The computer program product of claim 40, wherein
the network fusion layer dynamically manages the network traffic based upon prioritization criteria. 43. The computer program product of claim 40, wherein
the first and second network interfaces receive network traffic from a first application in the application layer, the first network interface is dedicated to a predefined first portion of the network traffic from the first application; the second network interface is dedicated to a predefined second portion of network traffic from the first application. 44. The computer program product of claim 40, wherein the computer usable program code further causes the computer hardware system to perform:
identifying a state change of one of the first and second network interfaces; and automatically deactivating, upon the state change being the one of the first and second network interfaces becoming unresponsive, the one of the first and second network interfaces. 45. The computer program product of claim 40, wherein the computer usable program code further causes the computer hardware system to perform:
receiving a communication request with a network access protocol destined to an interface having a different network access protocol; translating, using the network fusion layer, the communication request into the different network access protocol; and conveying the communication request over the interface via the different network access protocol. | Network traffic associated with a communication request within a computing device can be identified. The device can comprise a first and a second communication stack, which can address a first and a second network interface within the computing device. The first network interface can be associated with a mobile broadband network and the second network interface can be associated with a computing network. A first and a second portion of the network traffic associated with the communication request can be programmatically determined to be conveyed to the first and second network interfaces. The first and second portions of network traffic can be conveyed simultaneously to the mobile broadband network associated with the first network interface and the computing network associated with the second network interface. 1-25. (canceled) 26. A computer-implemented method within a computer hardware system implementing an application layer and a data link layer, comprising:
identifying, using a network fusion layer disposed between the application layer and the data link layer, a first portion of network traffic from the application layer; identifying, using the network fusion layer, a second portion of the network traffic from the application layer; routing, using the network fusion layer, the first portion to a first network interface associated with a computing network; and routing, using the network fusion layer, the second portion to a second network interface associated with a mobile phone network, wherein the identifying and the routing of the first and second portions occurs simultaneously. 27. The method of claim 26, wherein
the first network interface is dedicated to a first application in the application layer, and the second network interface is dedicated to a second application in the application layer. 28. The method of claim 26, wherein
the network fusion layer dynamically manages the network traffic based upon prioritization criteria. 29. The method of claim 26, wherein
the first and second network interfaces receive network traffic from a first application in the application layer, the first network interface is dedicated to a predefined first portion of the network traffic from the first application; the second network interface is dedicated to a predefined second portion of network traffic from the first application. 30. The method of claim 26, further comprising:
identifying a state change of one of the first and second network interfaces; and automatically deactivating, upon the state change being the one of the first and second network interfaces becoming unresponsive, the one of the first and second network interfaces. 31. The method of claim 26, further comprising:
receiving a communication request with a network access protocol destined to an interface having a different network access protocol; translating, using the network fusion layer, the communication request into the different network access protocol; and conveying the communication request over the interface via the different network access protocol. 32. The method of claim 26, wherein the network fusion layer includes:
a network interface manager configured to manage a plurality of the network interfaces associated with a plurality of network links; a data composer configured to assemble and disassemble a communication request associated with the network traffic; a session handler configured to establish a communication session between a source and at least one destination entity associated with the network traffic; a flow controller configured to moderate transmission speed of the network traffic transmitted over the plurality of network links; and a routing engine configured to convey at least a portion of the communication request to the source entity and the at least one destination entity utilizing the plurality of network links. 33. A computer hardware system implementing an application layer and a data link layer, comprising:
a hardware processor configured to initiate the following executable operations:
identifying, using a network fusion layer disposed between the application layer and the data link layer, a first portion of network traffic from the application layer;
identifying, using the network fusion layer, a second portion of the network traffic from the application layer;
routing, using the network fusion layer, the first portion to a first network interface associated with a computing network; and
routing, using the network fusion layer, the second portion to a second network interface associated with a mobile phone network, wherein
the identifying and the routing of the first and second portions occurs simultaneously. 34. The system of claim 33, wherein
the first network interface is dedicated to a first application in the application layer, and the second network interface is dedicated to a second application in the application layer. 35. The system of claim 33, wherein
the network fusion layer dynamically manages the network traffic based upon prioritization criteria. 36. The system of claim 33, wherein
the first and second network interfaces receive network traffic from a first application in the application layer, the first network interface is dedicated to a predefined first portion of the network traffic from the first application; the second network interface is dedicated to a predefined second portion of network traffic from the first application. 37. The system of claim 33, wherein the hardware processor is configured to initiate the following further executable operations:
identifying a state change of one of the first and second network interfaces; and automatically deactivating, upon the state change being the one of the first and second network interfaces becoming unresponsive, the one of the first and second network interfaces. 38. The system of claim 33, wherein the hardware processor is configured to initiate the following further executable operations:
receiving a communication request with a network access protocol destined to an interface having a different network access protocol; translating, using the network fusion layer, the communication request into the different network access protocol; and conveying the communication request over the interface via the different network access protocol. 39. The system of claim 33, wherein
the network fusion layer includes:
a network interface manager configured to manage a plurality of the network interfaces associated with a plurality of network links;
a data composer configured to assemble and disassemble a communication request associated with the network traffic;
a session handler configured to establish a communication session between a source and at least one destination entity associated with the network traffic;
a flow controller configured to moderate transmission speed of the network traffic transmitted over the plurality of network links; and
a routing engine configured to convey at least a portion of the communication request to the source entity and the at least one destination entity utilizing the plurality of network links. 40. A computer program product, comprising:
a hardware storage device having stored therein computer usable program code, the computer usable program code, which when executed by a computer hardware system implementing an application layer and a data link layer, causes the computer hardware system to perform:
identifying, using a network fusion layer disposed between the application layer and the data link layer, a first portion of network traffic from the application layer;
identifying, using the network fusion layer, a second portion of the network traffic from the application layer;
routing, using the network fusion layer, the first portion to a first network interface associated with a computing network; and
routing, using the network fusion layer, the second portion to a second network interface associated with a mobile phone network, wherein
the identifying and the routing of the first and second portions occurs simultaneously. 41. The computer program product of claim 40, wherein
the first network interface is dedicated to a first application in the application layer, and the second network interface is dedicated to a second application in the application layer. 42. The computer program product of claim 40, wherein
the network fusion layer dynamically manages the network traffic based upon prioritization criteria. 43. The computer program product of claim 40, wherein
the first and second network interfaces receive network traffic from a first application in the application layer, the first network interface is dedicated to a predefined first portion of the network traffic from the first application; the second network interface is dedicated to a predefined second portion of network traffic from the first application. 44. The computer program product of claim 40, wherein the computer usable program code further causes the computer hardware system to perform:
identifying a state change of one of the first and second network interfaces; and automatically deactivating, upon the state change being the one of the first and second network interfaces becoming unresponsive, the one of the first and second network interfaces. 45. The computer program product of claim 40, wherein the computer usable program code further causes the computer hardware system to perform:
receiving a communication request with a network access protocol destined to an interface having a different network access protocol; translating, using the network fusion layer, the communication request into the different network access protocol; and conveying the communication request over the interface via the different network access protocol. | 2,600 |
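The routing recited in claims 40 through 44 above — a fusion layer between the application and data-link layers that dedicates one interface to a first portion of traffic and another to a second portion, and deactivates an interface that becomes unresponsive — can be sketched as follows. All class, method, and interface names here are illustrative assumptions, not taken from the patent.

```python
class NetworkFusionLayer:
    """Minimal sketch: dedicate portions of application traffic to
    network interfaces and route accordingly (names hypothetical)."""

    def __init__(self):
        # Maps an (application, traffic-portion) key to an interface name.
        self.routes = {}
        self.active = set()

    def dedicate(self, app, portion, interface):
        self.routes[(app, portion)] = interface
        self.active.add(interface)

    def route(self, app, portion):
        interface = self.routes.get((app, portion))
        if interface not in self.active:
            # Claim 44: an unresponsive interface has been deactivated.
            raise RuntimeError(f"no responsive interface for {app}/{portion}")
        return interface

    def deactivate(self, interface):
        self.active.discard(interface)


fusion = NetworkFusionLayer()
fusion.dedicate("app1", "bulk", "wifi")      # first portion -> computing network
fusion.dedicate("app1", "realtime", "lte")   # second portion -> mobile phone network
print(fusion.route("app1", "bulk"))          # wifi
print(fusion.route("app1", "realtime"))      # lte
```

A real fusion layer would classify live packets rather than look up static keys; the table lookup only mirrors the claim's dedication of interfaces to predefined portions.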
9,734 | 9,734 | 14,362,232 | 2,611 | Image processing apparatus 110 comprising a processor 120 for combining a time-series of three-dimensional [3D] images into a single 3D image using an encoding function, the encoding function being arranged for encoding, in voxels of the single 3D image, a change over time in respective co-located voxels of the time-series of 3D images, an input 130 for obtaining a first and second time-series of 3D images 132 for generating, using the processor, a respective first and second 3D image 122 , and a renderer 140 for rendering, from a common viewpoint 154 , the first and the second 3D image 122 in an output image 162 for enabling comparative display of the change over time of the first and the second time-series of 3D images. | 1. Image processing apparatus comprising:
a processor for combining a time-series of three-dimensional [3D] images into a single 3D image, using an encoding function, the encoding function being arranged for encoding, in voxels of the single 3D image, a change over time in respective co-located voxels of the time-series of 3D images; an input for obtaining a first and second time-series of 3D images for generating, using the processor, a respective first and second 3D image; and a renderer for rendering, from a common viewpoint, the first and the second 3D image in an output image for enabling comparative display of the change over time of the first and the second time-series of 3D images. 2. Image processing apparatus according to claim 1, wherein the processor is arranged for using a further encoding function, wherein the further encoding function differs from the encoding function for differently encoding said change over time in respective co-located voxels of the time-series of 3D images, and wherein the processor is arranged for:
generating, using the encoding function, a first intermediate 3D image from the first time-series of 3D images and a second intermediate 3D image from the second time-series of 3D images; generating, using the further encoding function, a third intermediate 3D image from the first time-series of 3D images and a fourth intermediate 3D image from the second time-series of 3D images; and generating the first and the second 3D image in dependence on the first intermediate 3D image, the second intermediate 3D image, the third intermediate 3D image and the fourth intermediate 3D image. 3. Image processing apparatus according to claim 2, wherein the processor is arranged for (i) generating the first 3D image as a difference between the first intermediate 3D image and the second intermediate 3D image, and (ii) generating the second 3D image as the difference between the third intermediate 3D image and the fourth intermediate 3D image. 4. Image processing apparatus according to claim 3, wherein the renderer is arranged for (i) using an image fusion process to combine the first and the second 3D image into a fused 3D image, and (ii) rendering the fused 3D image in the output image. 5. Image processing apparatus according to claim 4, wherein the image fusion process comprises (i) mapping voxel values of the first 3D image to at least one of the group of: a hue, a saturation, an opacity of the voxel values of the fused 3D image, and (ii) mapping the voxel values of the second 3D image to at least another one out of said group. 6. Image processing apparatus according to claim 3, wherein the processor is arranged for using a registration process for obtaining the first and the second 3D image as being mutually registered 3D images. 7. 
Image processing apparatus according to claim 6, wherein the processor is arranged for evaluating a result of the registration process for, instead of rendering the fused 3D image in the output image, rendering the first and the second 3D image in separate viewports in the output image for obtaining a side-by-side rendering of the first and the second 3D image if the registration process fails. 8. Image processing apparatus according to claim 2, wherein the processor is arranged for (i) generating the first 3D image as a combination of the first intermediate 3D image and the third intermediate 3D image, and (ii) generating the second 3D image as the combination of the second intermediate 3D image and the fourth intermediate 3D image. 9. Image processing apparatus according to claim 8, wherein the processor is arranged for using an image fusion process for said generating of the first 3D image and/or said generating of the second 3D image. 10. Image processing apparatus according to claim 8, wherein the renderer is arranged for (i) rendering the first 3D image in a first viewport in the output image, and (ii) rendering the second 3D image in a second viewport in the output image, for obtaining a side-by-side rendering of the first and the second 3D image. 11. Image processing apparatus according to claim 1, further comprising a user input for enabling a user to modify the common viewpoint of the rendering. 12. Image processing apparatus according to claim 1, wherein the first time-series of 3D images constitutes a baseline exam of a patient showing perfusion of an organ and/or tissue of the patient at a baseline date, and the second time-series of 3D images constitutes a follow-up exam of the patient showing the perfusion of the organ and/or tissue of the patient at a follow-up date for enabling the comparative display of the perfusion at the baseline date and the follow-up date. 13. 
Workstation or imaging apparatus comprising the image processing apparatus according to claim 1. 14. A method comprising:
using a processor for combining a time-series of three-dimensional [3D] images into a single 3D image, using an encoding function, the encoding function being arranged for encoding, in voxels of the single 3D image, a change over time in respective co-located voxels of the time-series of 3D images; obtaining a first and second time-series of 3D images for generating, using the processor, a respective first and second 3D image; and rendering, from a common viewpoint, the first and the second 3D image in an output image for enabling a comparative display of the change over time of the first and the second time-series of 3D images. 15. A computer program product comprising instructions for causing a processor system to perform the method according to claim 14. | Image processing apparatus 110 comprising a processor 120 for combining a time-series of three-dimensional [3D] images into a single 3D image using an encoding function, the encoding function being arranged for encoding, in voxels of the single 3D image, a change over time in respective co-located voxels of the time-series of 3D images, an input 130 for obtaining a first and second time-series of 3D images 132 for generating, using the processor, a respective first and second 3D image 122 , and a renderer 140 for rendering, from a common viewpoint 154 , the first and the second 3D image 122 in an output image 162 for enabling comparative display of the change over time of the first and the second time-series of 3D images.1. Image processing apparatus comprising:
a processor for combining a time-series of three-dimensional [3D] images into a single 3D image, using an encoding function, the encoding function being arranged for encoding, in voxels of the single 3D image, a change over time in respective co-located voxels of the time-series of 3D images; an input for obtaining a first and second time-series of 3D images for generating, using the processor, a respective first and second 3D image; and a renderer for rendering, from a common viewpoint, the first and the second 3D image in an output image for enabling comparative display of the change over time of the first and the second time-series of 3D images. 2. Image processing apparatus according to claim 1, wherein the processor is arranged for using a further encoding function, wherein the further encoding function differs from the encoding function for differently encoding said change over time in respective co-located voxels of the time-series of 3D images, and wherein the processor is arranged for:
generating, using the encoding function, a first intermediate 3D image from the first time-series of 3D images and a second intermediate 3D image from the second time-series of 3D images; generating, using the further encoding function, a third intermediate 3D image from the first time-series of 3D images and a fourth intermediate 3D image from the second time-series of 3D images; and generating the first and the second 3D image in dependence on the first intermediate 3D image, the second intermediate 3D image, the third intermediate 3D image and the fourth intermediate 3D image. 3. Image processing apparatus according to claim 2, wherein the processor is arranged for (i) generating the first 3D image as a difference between the first intermediate 3D image and the second intermediate 3D image, and (ii) generating the second 3D image as the difference between the third intermediate 3D image and the fourth intermediate 3D image. 4. Image processing apparatus according to claim 3, wherein the renderer is arranged for (i) using an image fusion process to combine the first and the second 3D image into a fused 3D image, and (ii) rendering the fused 3D image in the output image. 5. Image processing apparatus according to claim 4, wherein the image fusion process comprises (i) mapping voxel values of the first 3D image to at least one of the group of: a hue, a saturation, an opacity of the voxel values of the fused 3D image, and (ii) mapping the voxel values of the second 3D image to at least another one out of said group. 6. Image processing apparatus according to claim 3, wherein the processor is arranged for using a registration process for obtaining the first and the second 3D image as being mutually registered 3D images. 7. 
Image processing apparatus according to claim 6, wherein the processor is arranged for evaluating a result of the registration process for, instead of rendering the fused 3D image in the output image, rendering the first and the second 3D image in separate viewports in the output image for obtaining a side-by-side rendering of the first and the second 3D image if the registration process fails. 8. Image processing apparatus according to claim 2, wherein the processor is arranged for (i) generating the first 3D image as a combination of the first intermediate 3D image and the third intermediate 3D image, and (ii) generating the second 3D image as the combination of the second intermediate 3D image and the fourth intermediate 3D image. 9. Image processing apparatus according to claim 8, wherein the processor is arranged for using an image fusion process for said generating of the first 3D image and/or said generating of the second 3D image. 10. Image processing apparatus according to claim 8, wherein the renderer is arranged for (i) rendering the first 3D image in a first viewport in the output image, and (ii) rendering the second 3D image in a second viewport in the output image, for obtaining a side-by-side rendering of the first and the second 3D image. 11. Image processing apparatus according to claim 1, further comprising a user input for enabling a user to modify the common viewpoint of the rendering. 12. Image processing apparatus according to claim 1, wherein the first time-series of 3D images constitutes a baseline exam of a patient showing perfusion of an organ and/or tissue of the patient at a baseline date, and the second time-series of 3D images constitutes a follow-up exam of the patient showing the perfusion of the organ and/or tissue of the patient at a follow-up date for enabling the comparative display of the perfusion at the baseline date and the follow-up date. 13. 
Workstation or imaging apparatus comprising the image processing apparatus according to claim 1. 14. A method comprising:
using a processor for combining a time-series of three-dimensional [3D] images into a single 3D image, using an encoding function, the encoding function being arranged for encoding, in voxels of the single 3D image, a change over time in respective co-located voxels of the time-series of 3D images; obtaining a first and second time-series of 3D images for generating, using the processor, a respective first and second 3D image; and rendering, from a common viewpoint, the first and the second 3D image in an output image for enabling a comparative display of the change over time of the first and the second time-series of 3D images. 15. A computer program product comprising instructions for causing a processor system to perform the method according to claim 14. | 2,600 |
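The encoding step in the apparatus and method claims above — collapsing a time-series of 3D volumes into a single 3D image whose voxels encode change over time, then differencing a baseline and follow-up series for comparative display — can be sketched with NumPy. The patent does not fix a particular encoding function; the temporal range (max minus min) used below is one illustrative choice.

```python
import numpy as np

def encode_change(series):
    """Collapse a time-series of co-registered 3D volumes
    (shape: time x z x y x x) into one 3D volume per voxel.
    Temporal range is an assumed, illustrative encoding function."""
    return series.max(axis=0) - series.min(axis=0)

# Toy baseline and follow-up exams: 4 time points of 2x2x2 volumes.
baseline = np.stack([np.full((2, 2, 2), t, dtype=float) for t in range(4)])
followup = np.stack([np.full((2, 2, 2), 2 * t, dtype=float) for t in range(4)])

first_3d = encode_change(baseline)    # every voxel changed by 3 over time
second_3d = encode_change(followup)   # every voxel changed by 6 over time
difference = second_3d - first_3d     # input to the comparative rendering
```

In the claimed apparatus this single-volume output would then be rendered from a common viewpoint, e.g. fused by mapping the two encoded volumes to different hue/saturation/opacity channels.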
9,735 | 9,735 | 15,067,837 | 2,649 | A processor may be configured to detect an electronic tag in proximity to a vehicle wireless receiver. The processor may also be configured to wirelessly receive communication provider account information from the electronic tag via the receiver. The processor may be additionally configured to provide the communication provider account information to a vehicle telematics module, including an onboard modem to enable provision of in-vehicle telematics services, through the modem, using the communication provider account information. | 1. A system comprising:
a processor configured to: enable a vehicle modem to provide connectivity for vehicle telematics services based on communication provider account information wirelessly provided to the processor from an electronic tag through a wireless receiver. 2. The system of claim 1, wherein the wireless receiver is a near field communication receiver. 3. The system of claim 1, wherein the wireless receiver is a radio frequency identification receiver. 4. The system of claim 1, wherein the wireless receiver is a BLUETOOTH low energy receiver. 5. The system of claim 1, wherein the communication provider account information includes a cellular service provider identification. 6. The system of claim 1, wherein the processor is further configured to enable the vehicle telematics services after detecting that the electronic tag was brought into detectable proximity to the wireless receiver for a first time. 7. The system of claim 6, wherein the processor is further configured to disable the vehicle telematics services after detecting that the electronic tag was brought into detectable proximity to the wireless receiver for a second time. 8. The system of claim 7, wherein the processor is further configured to delete the communication provider account information from a vehicle memory in conjunction with disabling the vehicle telematics services. 9. The system of claim 1, wherein the processor is configured to delete the communication provider account information from a vehicle memory upon vehicle power-down. 10. The system of claim 1, wherein the processor is configured to delete the communication provider account information when the electronic tag is no longer detectable by the wireless receiver. 11. A computer-implemented method comprising:
detecting an electronic tag in proximity to a vehicle wireless receiver for a first time; wirelessly receiving communication provider account information from the electronic tag via the vehicle wireless receiver; providing the communication provider account information to a vehicle telematics module having an onboard modem; and providing in-vehicle telematics connectivity through the onboard modem using the communication provider account information. 12. The method of claim 11, wirelessly receiving comprising receiving communication provider account information using near field communication. 13. The method of claim 11, wirelessly receiving comprising receiving communication provider account information using radio frequency identification. 14. The method of claim 11, wirelessly receiving comprising receiving communication provider account information using BLUETOOTH low energy. 15. The method of claim 11, further comprising disabling the in-vehicle telematics services after detecting the electronic tag in proximity to the vehicle wireless receiver for a second time. 16. The method of claim 11, further comprising deleting the communication provider account information from a vehicle memory in conjunction with disabling the in-vehicle telematics services. 17. The method of claim 11, further comprising deleting the communication provider account information from a vehicle memory upon vehicle power-down. 18. The method of claim 11, further comprising deleting the communication provider account information when the electronic tag is no longer detectable by the vehicle wireless receiver. 19. A non-transitory computer-readable storage medium storing instructions which, when executed by a processor cause the processor to perform a method comprising:
wirelessly receiving communication provider account information from an electronic tag in detectable proximity to a vehicle wireless receiver; providing the communication provider account information to a vehicle telematics module having an onboard modem; providing in-vehicle telematics connectivity through the modem using the communication provider account information; and deleting the communication provider account information from a vehicle memory upon detection of a predefined deletion condition. 20. The storage medium of claim 19, wherein the predefined deletion condition includes at least one of:
a vehicle power-down; the electronic tag being brought into detectable proximity for a second time, after leaving detectable proximity; or the electronic tag leaving detectable proximity. | A processor may be configured to detect an electronic tag in proximity to a vehicle wireless receiver. The processor may also be configured to wirelessly receive communication provider account information from the electronic tag via the receiver. The processor may be additionally configured to provide the communication provider account information to a vehicle telematics module, including an onboard modem to enable provision of in-vehicle telematics services, through the modem, using the communication provider account information.1. A system comprising:
a processor configured to: enable a vehicle modem to provide connectivity for vehicle telematics services based on communication provider account information wirelessly provided to the processor from an electronic tag through a wireless receiver. 2. The system of claim 1, wherein the wireless receiver is a near field communication receiver. 3. The system of claim 1, wherein the wireless receiver is a radio frequency identification receiver. 4. The system of claim 1, wherein the wireless receiver is a BLUETOOTH low energy receiver. 5. The system of claim 1, wherein the communication provider account information includes a cellular service provider identification. 6. The system of claim 1, wherein the processor is further configured to enable the vehicle telematics services after detecting that the electronic tag was brought into detectable proximity to the wireless receiver for a first time. 7. The system of claim 6, wherein the processor is further configured to disable the vehicle telematics services after detecting that the electronic tag was brought into detectable proximity to the wireless receiver for a second time. 8. The system of claim 7, wherein the processor is further configured to delete the communication provider account information from a vehicle memory in conjunction with disabling the vehicle telematics services. 9. The system of claim 1, wherein the processor is configured to delete the communication provider account information from a vehicle memory upon vehicle power-down. 10. The system of claim 1, wherein the processor is configured to delete the communication provider account information when the electronic tag is no longer detectable by the wireless receiver. 11. A computer-implemented method comprising:
detecting an electronic tag in proximity to a vehicle wireless receiver for a first time; wirelessly receiving communication provider account information from the electronic tag via the vehicle wireless receiver; providing the communication provider account information to a vehicle telematics module having an onboard modem; and providing in-vehicle telematics connectivity through the onboard modem using the communication provider account information. 12. The method of claim 11, wirelessly receiving comprising receiving communication provider account information using near field communication. 13. The method of claim 11, wirelessly receiving comprising receiving communication provider account information using radio frequency identification. 14. The method of claim 11, wirelessly receiving comprising receiving communication provider account information using BLUETOOTH low energy. 15. The method of claim 11, further comprising disabling the in-vehicle telematics services after detecting the electronic tag in proximity to the vehicle wireless receiver for a second time. 16. The method of claim 11, further comprising deleting the communication provider account information from a vehicle memory in conjunction with disabling the in-vehicle telematics services. 17. The method of claim 11, further comprising deleting the communication provider account information from a vehicle memory upon vehicle power-down. 18. The method of claim 11, further comprising deleting the communication provider account information when the electronic tag is no longer detectable by the vehicle wireless receiver. 19. A non-transitory computer-readable storage medium storing instructions which, when executed by a processor cause the processor to perform a method comprising:
wirelessly receiving communication provider account information from an electronic tag in detectable proximity to a vehicle wireless receiver; providing the communication provider account information to a vehicle telematics module having an onboard modem; providing in-vehicle telematics connectivity through the modem using the communication provider account information; and deleting the communication provider account information from a vehicle memory upon detection of a predefined deletion condition. 20. The storage medium of claim 19, wherein the predefined deletion condition includes at least one of:
a vehicle power-down; the electronic tag being brought into detectable proximity for a second time, after leaving detectable proximity; or the electronic tag leaving detectable proximity. | 2,600 |
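The enable/disable cycle in claims 6 through 9 above — a first tag detection enables telematics using the tag's account information, a second detection disables it and deletes the information, and power-down also deletes it — can be sketched as a small state machine. The class and attribute names are hypothetical.

```python
class TelematicsController:
    """Illustrative sketch of the claimed tag-driven toggle;
    not an actual vehicle-module API."""

    def __init__(self):
        self.account = None
        self.enabled = False

    def tag_detected(self, account_info):
        if not self.enabled:
            # First detection: store account info, enable services.
            self.account = account_info
            self.enabled = True
        else:
            # Second detection: disable services and delete the info
            # from vehicle memory, per claims 7-8.
            self.enabled = False
            self.account = None

    def power_down(self):
        # Claim 9: delete the account information on vehicle power-down.
        self.enabled = False
        self.account = None


ctrl = TelematicsController()
ctrl.tag_detected({"provider": "carrier-x"})   # first tap: connectivity on
ctrl.tag_detected({"provider": "carrier-x"})   # second tap: off, info deleted
```

The tag-leaves-proximity deletion condition of claim 10 would hang off the same object, triggered by the wireless receiver's presence polling.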
9,736 | 9,736 | 14,276,518 | 2,628 | An embodiment provides a method, including: capturing, using an image capture device, image data; analyzing, using a processor, the image data to identify a gesture control; determining, using a processor, the gesture control was inadvertent; and disregarding, using a processor, the gesture control. Other aspects are described and claimed. | 1. A method, comprising:
capturing, using an image capture device, image data; analyzing, using a processor, the image data to identify a gesture control; determining, using a processor, the gesture control was inadvertent; and disregarding, using a processor, the gesture control. 2. The method of claim 1, wherein:
the determining comprises determining that a user feature location of the image data is outside of a predetermined user feature location. 3. The method of claim 2, wherein the user feature location is selected from the group consisting of a nose location, an eye location, a hand location, and a head location. 4. The method of claim 1, comprising:
capturing, using an image capture device, other image data; and analyzing the other image data to identify another gesture control; wherein: the determining comprises determining a time between receipt of the gesture control and the another gesture control is below a predetermined time threshold. 5. The method of claim 1, comprising:
detecting, using a processor, a user input provided through a non-gesture input device; wherein the determining comprises determining a conflict between the gesture control and the user input provided through a non-gesture input device. 6. The method of claim 5, wherein the non-gesture input device is selected from the group consisting of a mouse, a keyboard, a voice input component, and a touch screen. 7. The method of claim 1, further comprising:
accessing underlying application context data; wherein the determining comprises determining the gesture control conflicts with the underlying application context data. 8. The method of claim 7, wherein the underlying application context data is derived from an underlying application controllable by gesture input. 9. The method of claim 7, wherein the underlying application context data is derived from an underlying application not controllable by gesture input. 10. The method of claim 1, wherein the determining comprises conducting at least two checks for context conflict between the identified gesture control and available context data. 11. An apparatus, comprising:
an image capture device; a processor operatively coupled to the image capture device; and a memory storing instructions that are executable by the processor to: capture, using the image capture device, image data; analyze the image data to identify a gesture control; determine the gesture control was inadvertent; and disregard the gesture control. 12. The apparatus of claim 11, wherein:
to determine comprises determining that a user feature location of the image data is outside of a predetermined user feature location. 13. The apparatus of claim 12, wherein the user feature location is selected from the group consisting of a nose location, an eye location, a hand location, and a head location. 14. The apparatus of claim 11, wherein the instructions are further executable by the processor to:
capture, using the image capture device, other image data; and analyze the other image data to identify another gesture control; wherein: to determine comprises determining a time between receipt of the gesture control and the another gesture control is below a predetermined time threshold. 15. The apparatus of claim 11, wherein the instructions are further executable by the processor to:
detect a user input provided through a non-gesture input device; wherein to determine comprises determining a conflict between the gesture control and the user input provided through a non-gesture input device. 16. The apparatus of claim 15, wherein the non-gesture input device is selected from the group consisting of a mouse, a keyboard, a voice input component, and a touch screen. 17. The apparatus of claim 11, wherein the instructions are further executable by the processor to:
access underlying application context data; wherein to determine comprises determining the gesture control conflicts with the underlying application context data. 18. The apparatus of claim 17, wherein the underlying application context data is derived from an underlying application controllable by gesture input. 19. The apparatus of claim 17, wherein the underlying application context data is derived from an underlying application not controllable by gesture input. 20. A product, comprising:
a computer readable storage device storing code therewith, the code being executable by a processor and comprising: code that captures, using an image capture device, image data; code that analyzes, using a processor, the image data to identify a gesture control; code that determines, using a processor, the gesture control was inadvertent; and code that disregards, using a processor, the gesture control. | An embodiment provides a method, including: capturing, using an image capture device, image data; analyzing, using a processor, the image data to identify a gesture control; determining, using a processor, the gesture control was inadvertent; and disregarding, using a processor, the gesture control. Other aspects are described and claimed.1. A method, comprising:
capturing, using an image capture device, image data; analyzing, using a processor, the image data to identify a gesture control; determining, using a processor, the gesture control was inadvertent; and disregarding, using a processor, the gesture control. 2. The method of claim 1, wherein:
the determining comprises determining that a user feature location of the image data is outside of a predetermined user feature location. 3. The method of claim 2, wherein the user feature location is selected from the group consisting of a nose location, an eye location, a hand location, and a head location. 4. The method of claim 1, comprising:
capturing, using an image capture device, other image data; and analyzing the other image data to identify another gesture control; wherein: the determining comprises determining a time between receipt of the gesture control and the another gesture control is below a predetermined time threshold. 5. The method of claim 1, comprising:
detecting, using a processor, a user input provided through a non-gesture input device; wherein the determining comprises determining a conflict between the gesture control and the user input provided through a non-gesture input device. 6. The method of claim 5, wherein the non-gesture input device is selected from the group consisting of a mouse, a keyboard, a voice input component, and a touch screen. 7. The method of claim 1, further comprising:
accessing underlying application context data; wherein the determining comprises determining the gesture control conflicts with the underlying application context data. 8. The method of claim 7, wherein the underlying application context data is derived from an underlying application controllable by gesture input. 9. The method of claim 7, wherein the underlying application context data is derived from an underlying application not controllable by gesture input. 10. The method of claim 1, wherein the determining comprises conducting at least two checks for context conflict between the identified gesture control and available context data. 11. An apparatus, comprising:
an image capture device; a processor operatively coupled to the image capture device; and a memory storing instructions that are executable by the processor to: capture, using the image capture device, image data; analyze the image data to identify a gesture control; determine the gesture control was inadvertent; and disregard the gesture control. 12. The apparatus of claim 11, wherein:
to determine comprises determining that a user feature location of the image data is outside of a predetermined user feature location. 13. The apparatus of claim 12, wherein the user feature location is selected from the group consisting of a nose location, an eye location, a hand location, and a head location. 14. The apparatus of claim 11, wherein the instructions are further executable by the processor to:
capture, using the image capture device, other image data; and analyze the other image data to identify another gesture control; wherein: to determine comprises determining a time between receipt of the gesture control and the another gesture control is below a predetermined time threshold. 15. The apparatus of claim 11, wherein the instructions are further executable by the processor to:
detect a user input provided through a non-gesture input device; wherein to determine comprises determining a conflict between the gesture control and the user input provided through a non-gesture input device. 16. The apparatus of claim 15, wherein the non-gesture input device is selected from the group consisting of a mouse, a keyboard, a voice input component, and a touch screen. 17. The apparatus of claim 11, wherein the instructions are further executable by the processor to:
access underlying application context data; wherein to determine comprises determining the gesture control conflicts with the underlying application context data. 18. The apparatus of claim 17, wherein the underlying application context data is derived from an underlying application controllable by gesture input. 19. The apparatus of claim 17, wherein the underlying application context data is derived from an underlying application not controllable by gesture input. 20. A product, comprising:
a computer readable storage device storing code therewith, the code being executable by a processor and comprising: code that captures, using an image capture device, image data; code that analyzes, using a processor, the image data to identify a gesture control; code that determines, using a processor, the gesture control was inadvertent; and code that disregards, using a processor, the gesture control. | 2,600 |
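The claims in the row above describe several checks for deciding that a captured gesture control was inadvertent. One such check (claims 4 and 14) compares the time between two successive gesture controls against a predetermined threshold. A minimal illustrative sketch of that check follows; the function name, argument names, and the 0.5-second threshold are assumptions for illustration, not taken from the application.

```python
# Hypothetical sketch of the timing-based "inadvertent gesture" check
# described in claims 4 and 14 above: a gesture control arriving within a
# predetermined time threshold of the previous one is deemed inadvertent
# and disregarded. The threshold value is an illustrative assumption.

def is_inadvertent(prev_time_s: float, curr_time_s: float,
                   threshold_s: float = 0.5) -> bool:
    """Return True if the later gesture should be disregarded."""
    return (curr_time_s - prev_time_s) < threshold_s

print(is_inadvertent(10.0, 10.2))  # True  -> disregard the second gesture
print(is_inadvertent(10.0, 11.0))  # False -> accept it
```

The claims combine this with other context checks (user feature location, non-gesture input conflicts, application context), so a real implementation would treat this as one of several conjunctive tests.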
9,737 | 9,737 | 14,402,185 | 2,667 | A system and method for receiving a medical image, receiving an adaptation of a model of a physical structure, the adaptation relating to the medical image, determining an image quantity of the medical image at each of a plurality of vertices of the adaptation and aggregating the plurality of image quantities to determine an evaluation metric. | 1. A method, comprising:
receiving a medical image of a physical structure of a patient; receiving an adaptation of a mesh model of the physical structure, to the medical image; determining an image quantity based on the intensity values of the medical image at each of a plurality of vertices of the adapted mesh model; and aggregating the plurality of image quantities to determine an adaptation quality metric. 2. The method of claim 1, wherein the aggregating comprises determining a mean average. 3. The method of claim 1, wherein the medical image is one of an MRI, a CT, and an ultrasound. 4. The method of claim 1, wherein the mesh model is a deformable brain model. 5. The method of claim 1, further comprising:
comparing the adaptation quality metric to a threshold value; approving the adaptation if the adaptation quality metric is greater than or equal to the threshold value; and rejecting the adaptation if the adaptation quality metric is less than the threshold value. 6. The method of claim 1, wherein the image quantity is one of an image intensity, an image gradient, and a gradient magnitude. 7. The method of claim 1, wherein the aggregating comprises considering each of the vertices separately to determine separate adaptation quality metrics for each of a plurality of subsets of the medical image. 8. A system, comprising:
a memory storing a medical image of a physical structure of a patient and an adaptation of a mesh model of the physical structure, to the medical image; and a processor determining an image quantity based on the intensity values of the medical image at each of a plurality of vertices of the adapted mesh model and aggregating the plurality of image quantities to determine an adaptation quality metric. 9. The system of claim 8, further comprising:
an imaging apparatus generating the medical image. 10. The system of claim 9, wherein, after the imaging apparatus generates the medical image, the processor adapts the mesh model to the medical image to generate the adaptation. 11. (canceled) 12. (canceled) 13. (canceled) 14. The system of claim 8, wherein the processor further compares the adaptation quality metric to a threshold value, approves the adaptation if the adaptation quality metric is greater than or equal to the threshold value, and rejects the adaptation if the adaptation quality metric is less than the threshold value. 15. The system of claim 8, wherein the image quantity is one of an image intensity, an image gradient, and a gradient magnitude. 16. The system of claim 8, wherein the processor considers each of the vertices separately to determine separate adaptation quality metrics for each of a plurality of subsets of the medical image. 17. A non-transitory computer-readable storage medium storing a set of instructions executable by a processor, to enable the processor to perform a method comprising:
receiving a medical image of a physical structure of a patient; receiving an adaptation of a mesh model of the physical structure, to the medical image; determining an image quantity of the intensity value of the medical image at each of a plurality of vertices of the adapted mesh model; and aggregating the plurality of image quantities to determine an adaptation quality metric. 18. The non-transitory computer-readable storage medium of claim 17, wherein the method further comprises:
comparing the adaptation quality metric to a threshold value; approving the adaptation if the adaptation quality metric is greater than or equal to the threshold value; and rejecting the adaptation if the adaptation quality metric is less than the threshold value. 19. (canceled) 20. (canceled) | 2,600 |
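The row above (application 14,402,185) claims a method that samples an image quantity at each vertex of an adapted mesh model, aggregates the samples (e.g., by mean average, claim 2) into an adaptation quality metric, and accepts or rejects the adaptation against a threshold (claim 5). A minimal sketch of that pipeline follows; the function names, the 2-D list image layout, and the (row, col) vertex representation are illustrative assumptions, not from the application.

```python
from statistics import mean

# Hypothetical sketch of the adaptation-quality evaluation in the claims
# above: sample an image quantity (here, intensity) at each vertex of the
# adapted mesh, aggregate by mean average, and compare to a threshold.

def adaptation_quality(image, vertices):
    """Mean image intensity sampled at each (row, col) mesh vertex."""
    return mean(image[r][c] for r, c in vertices)

def accept_adaptation(image, vertices, threshold):
    """Approve the adaptation iff its quality metric reaches the threshold."""
    return adaptation_quality(image, vertices) >= threshold

image = [
    [0, 10, 20],
    [30, 40, 50],
    [60, 70, 80],
]
vertices = [(0, 1), (1, 1), (2, 1)]  # adapted mesh vertices on the image grid
print(adaptation_quality(image, vertices))       # 40
print(accept_adaptation(image, vertices, 35.0))  # True
```

Claim 6 notes the image quantity could instead be an image gradient or gradient magnitude, and claim 7 notes the vertices may be aggregated per subset rather than globally; both are straightforward variations of the same sampling-and-aggregation loop.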
9,738 | 9,738 | 14,289,522 | 2,658 | In general, techniques are described for compressing decomposed representations of a sound field. A device comprising one or more processors may be configured to perform the techniques. The one or more processors may be configured to obtain a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. | 1. A method comprising:
obtaining a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. 2. The method of claim 1, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a field specifying a prediction mode used when compressing the spatial component. 3. The method of claim 1, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, Huffman table information specifying a Huffman table used when compressing the spatial component. 4. The method of claim 1, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a field indicating a value that expresses a quantization step size or a variable thereof used when compressing the spatial component. 5. The method of claim 4, wherein the value comprises an nbits value. 6. The method of claim 4,
wherein the bitstream comprises a compressed version of a plurality of spatial components of the sound field of which the compressed version of the spatial component is included, and wherein the value expresses the quantization step size or a variable thereof used when compressing the plurality of spatial components. 7. The method of claim 1, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a Huffman code to represent a category identifier that identifies a compression category to which the spatial component corresponds. 8. The method of claim 1, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a sign bit identifying whether the spatial component is a positive value or a negative value. 9. The method of claim 1, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a Huffman code to represent a residual value of the spatial component. 10. The method of claim 1, wherein obtaining the bitstream comprises generating the bitstream with a bitstream generation device. 11. The method of claim 1, wherein obtaining the bitstream comprises obtaining the bitstream with a bitstream extraction device. 12. The method of claim 1, wherein the vector based synthesis comprises a singular value decomposition. 13. A device comprising:
one or more processors configured to obtain a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. 14. The device of claim 13, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a field specifying a prediction mode used when compressing the spatial component. 15. The device of claim 13, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, Huffman table information specifying a Huffman table used when compressing the spatial component. 16. The device of claim 13, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a field indicating a value that expresses a quantization step size or a variable thereof used when compressing the spatial component. 17. The device of claim 16, wherein the value comprises an nbits value. 18. The device of claim 16,
wherein the bitstream comprises a compressed version of a plurality of spatial components of the sound field of which the compressed version of the spatial component is included, and wherein the value expresses the quantization step size or a variable thereof used when compressing the plurality of spatial components. 19. The device of claim 13, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a Huffman code to represent a category identifier that identifies a compression category to which the spatial component corresponds. 20. The device of claim 13, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a sign bit identifying whether the spatial component is a positive value or a negative value. 21. The device of claim 13, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a Huffman code to represent a residual value of the spatial component. 22. The device of claim 13, wherein the device comprises an audio encoding device or a bitstream generation device. 23. The device of claim 13, wherein the device comprises an audio decoding device. 24. The device of claim 13, wherein the vector based synthesis comprises a singular value decomposition. 25. A device comprising:
means for obtaining a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients; and means for storing the bitstream. 26. A non-transitory computer-readable storage medium having stored thereon instructions that when executed cause one or more processors to obtain a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. 27. A method comprising:
generating a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. 28. The method of claim 27, wherein generating the bitstream comprises generating the bitstream to include a field specifying a prediction mode used when compressing the spatial component. 29. The method of claim 27, wherein generating the bitstream comprises generating the bitstream to include Huffman table information specifying a Huffman table used when compressing the spatial component. 30. The method of claim 27, wherein generating the bitstream comprises generating the bitstream to include a field indicating a value that expresses a quantization step size or a variable thereof used when compressing the spatial component. 31. The method of claim 30, wherein the value comprises an nbits value. 32. The method of claim 30,
wherein generating the bitstream comprises generating the bitstream to include a compressed version of a plurality of spatial components of the sound field of which the compressed version of the spatial component is included, and wherein the value expresses the quantization step size or a variable thereof used when compressing the plurality of spatial components. 33. The method of claim 27, wherein generating the bitstream comprises generating the bitstream to include a Huffman code to represent a category identifier that identifies a compression category to which the spatial component corresponds. 34. The method of claim 27, wherein generating the bitstream comprises generating the bitstream to include a sign bit identifying whether the spatial component is a positive value or a negative value. 35. The method of claim 27, wherein generating the bitstream comprises generating the bitstream to include a Huffman code to represent a residual value of the spatial component. 36. The method of claim 27, wherein the vector based synthesis comprises a singular value decomposition. 37. A device comprising:
one or more processors configured to generate a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. 38. The device of claim 37, wherein the one or more processors are configured to generate the bitstream to include a field specifying a prediction mode used when compressing the spatial component. 39. The device of claim 37, wherein the one or more processors are configured to generate the bitstream to include Huffman table information specifying a Huffman table used when compressing the spatial component. 40. The device of claim 37, wherein the one or more processors are configured to generate the bitstream to include a field indicating a value that expresses a quantization step size or a variable thereof used when compressing the spatial component. 41. The device of claim 40, wherein the value comprises an nbits value. 42. The device of claim 40,
wherein the one or more processors are configured to generate the bitstream to include a compressed version of a plurality of spatial components of the sound field of which the compressed version of the spatial component is included, and wherein the value expresses the quantization step size or a variable thereof used when compressing the plurality of spatial components. 43. The device of claim 37, wherein the one or more processors are configured to generate the bitstream to include a Huffman code to represent a category identifier that identifies a compression category to which the spatial component corresponds. 44. The device of claim 37, wherein the one or more processors are configured to generate the bitstream to include a sign bit identifying whether the spatial component is a positive value or a negative value. 45. The device of claim 37, wherein the one or more processors are configured to generate the bitstream to include a Huffman code to represent a residual value of the spatial component. 46. The device of claim 37, wherein the vector based synthesis comprises a singular value decomposition. 47. A device comprising:
means for generating a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients; and means for storing the bitstream. 48. A non-transitory computer-readable storage medium comprising instructions that when executed cause one or more processors to:
generate a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. | In general, techniques are described for compressing decomposed representations of a sound field. A device comprising one or more processors may be configured to perform the techniques. The one or more processors may be configured to obtain a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients.1. A method comprising:
obtaining a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. 2. The method of claim 1, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a field specifying a prediction mode used when compressing the spatial component. 3. The method of claim 1, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, Huffman table information specifying a Huffman table used when compressing the spatial component. 4. The method of claim 1, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a field indicating a value that expresses a quantization step size or a variable thereof used when compressing the spatial component. 5. The method of claim 4, wherein the value comprises an nbits value. 6. The method of claim 4,
wherein the bitstream comprises a compressed version of a plurality of spatial components of the sound field of which the compressed version of the spatial component is included, and wherein the value expresses the quantization step size or a variable thereof used when compressing the plurality of spatial components. 7. The method of claim 1, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a Huffman code to represent a category identifier that identifies a compression category to which the spatial component corresponds. 8. The method of claim 1, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a sign bit identifying whether the spatial component is a positive value or a negative value. 9. The method of claim 1, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a Huffman code to represent a residual value of the spatial component. 10. The method of claim 1, wherein obtaining the bitstream comprises generating the bitstream with a bitstream generation device. 11. The method of claim 1, wherein obtaining the bitstream comprises obtaining the bitstream with a bitstream extraction device. 12. The method of claim 1, wherein the vector based synthesis comprises a singular value decomposition. 13. A device comprising:
one or more processors configured to obtain a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. 14. The device of claim 13, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a field specifying a prediction mode used when compressing the spatial component. 15. The device of claim 13, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, Huffman table information specifying a Huffman table used when compressing the spatial component. 16. The device of claim 13, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a field indicating a value that expresses a quantization step size or a variable thereof used when compressing the spatial component. 17. The device of claim 16, wherein the value comprises an nbits value. 18. The device of claim 16,
wherein the bitstream comprises a compressed version of a plurality of spatial components of the sound field of which the compressed version of the spatial component is included, and wherein the value expresses the quantization step size or a variable thereof used when compressing the plurality of spatial components. 19. The device of claim 13, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a Huffman code to represent a category identifier that identifies a compression category to which the spatial component corresponds. 20. The device of claim 13, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a sign bit identifying whether the spatial component is a positive value or a negative value. 21. The device of claim 13, wherein the compressed version of the spatial component is represented in the bitstream using, at least in part, a Huffman code to represent a residual value of the spatial component. 22. The device of claim 13, wherein the device comprises an audio encoding device a bitstream generation device. 23. The device of claim 13, wherein the device comprises an audio decoding device. 24. The device of claim 13, wherein the vector based synthesis comprises a singular value decomposition. 25. A device comprising:
means for obtaining a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients; and means for storing the bitstream. 26. A non-transitory computer-readable storage medium having stored thereon instructions that when executed cause one or more processors to obtain a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. 27. A method comprising:
generating a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. 28. The method of claim 27, wherein generating the bitstream comprises generating the bitstream to include a field specifying a prediction mode used when compressing the spatial component. 29. The method of claim 27, wherein generating the bitstream comprises generating the bitstream to include Huffman table information specifying a Huffman table used when compressing the spatial component. 30. The method of claim 27, wherein generating the bitstream comprises generating the bitstream to include a field indicating a value that expresses a quantization step size or a variable thereof used when compressing the spatial component. 31. The method of claim 30, wherein the value comprises an nbits value. 32. The method of claim 30,
wherein generating the bitstream comprises generating the bitstream to include a compressed version of a plurality of spatial components of the sound field of which the compressed version of the spatial component is included, and wherein the value expresses the quantization step size or a variable thereof used when compressing the plurality of spatial components. 33. The method of claim 27, wherein generating the bitstream comprises generating the bitstream to include a Huffman code to represent a category identifier that identifies a compression category to which the spatial component corresponds. 34. The method of claim 27, wherein generating the bitstream comprises generating the bitstream to include a sign bit identifying whether the spatial component is a positive value or a negative value. 35. The method of claim 27, wherein generating the bitstream comprises generating the bitstream to include a Huffman code to represent a residual value of the spatial component. 36. The method of claim 27, wherein the vector based synthesis comprises a singular value decomposition. 37. A device comprising:
one or more processors configured to generate a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. 38. The device of claim 37, wherein the one or more processors are configured to generate the bitstream to include a field specifying a prediction mode used when compressing the spatial component. 39. The device of claim 37, wherein the one or more processors are configured to generate the bitstream to include Huffman table information specifying a Huffman table used when compressing the spatial component. 40. The device of claim 37, wherein the one or more processors are configured to generate the bitstream to include a field indicating a value that expresses a quantization step size or a variable thereof used when compressing the spatial component. 41. The device of claim 40, wherein the value comprises an nbits value. 42. The device of claim 40,
wherein the one or more processors are configured to generate the bitstream to include a compressed version of a plurality of spatial components of the sound field of which the compressed version of the spatial component is included, and wherein the value expresses the quantization step size or a variable thereof used when compressing the plurality of spatial components. 43. The device of claim 37, wherein the one or more processors are configured to generate the bitstream to include a Huffman code to represent a category identifier that identifies a compression category to which the spatial component corresponds. 44. The device of claim 37, wherein the one or more processors are configured to generate the bitstream to include a sign bit identifying whether the spatial component is a positive value or a negative value. 45. The device of claim 37, wherein the one or more processors are configured to generate the bitstream to include a Huffman code to represent a residual value of the spatial component. 46. The device of claim 37, wherein the vector based synthesis comprises a singular value decomposition. 47. A device comprising:
means for generating a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients; and means for storing the bitstream. 48. A non-transitory computer-readable storage medium comprising instructions that when executed cause one or more processors to:
generate a bitstream comprising a compressed version of a spatial component of a sound field, the spatial component generated by performing a vector based synthesis with respect to a plurality of spherical harmonic coefficients. | 2,600 |
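The claims above describe compressing a spatial component with an nbits quantization step size, a sign bit, and a Huffman-coded residual. A minimal sketch of the uniform-quantization part only (Python; the function names and the round-trip layout are illustrative assumptions, not taken from the patent, and the Huffman stage is omitted):

```python
def quantize_component(value: float, nbits: int) -> tuple[int, int]:
    """Uniformly quantize a spatial-component value.

    Returns (sign_bit, magnitude_index) using a step size of 2**-nbits,
    mirroring the claim's 'value that expresses a quantization step size'
    and the separate sign bit of claim 44.
    """
    step = 2.0 ** -nbits
    sign_bit = 0 if value >= 0 else 1
    magnitude_index = int(abs(value) / step)  # residual entropy coding omitted
    return sign_bit, magnitude_index


def dequantize_component(sign_bit: int, magnitude_index: int, nbits: int) -> float:
    """Invert quantize_component; error is bounded by the step size."""
    step = 2.0 ** -nbits
    magnitude = magnitude_index * step
    return -magnitude if sign_bit else magnitude


# Round-trip error stays below one quantization step.
sign, idx = quantize_component(-0.3721, nbits=6)
restored = dequantize_component(sign, idx, nbits=6)
assert abs(restored - (-0.3721)) < 2.0 ** -6
```

In a real codec the magnitude index would then be split into a category identifier and residual and entropy-coded (claims 43 and 45); this sketch stops at the scalar quantization that the nbits field controls.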
9,739 | 9,739 | 14,478,033 | 2,659 | Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for obtaining a textual term; determining, by one or more computers, a vector representing a phonetic feature of the textual term; comparing the vector representing the phonetic feature of the textual term with a reference vector representing a phonetic feature of a reference textual term; and classifying the textual term based on the comparing the vector with the reference vector. | 1. A computer-implemented method, comprising:
obtaining a textual term; determining, by one or more computers, a vector representing a phonetic feature of the textual term; comparing the vector representing the phonetic feature of the textual term with a reference vector representing a phonetic feature of a reference textual term; and classifying the textual term based on the comparing the vector with the reference vector. 2. The method of claim 1, wherein obtaining the textual term comprises obtaining the textual term from a resource stored at a remote computer. 3. The method of claim 1, wherein obtaining the textual term comprises obtaining the textual term from a search query. 4. The method of claim 1, wherein determining the vector representing the phonetic feature of the textual term comprises determining a pronunciation of the textual term. 5. The method of claim 1,
wherein the textual term includes a plurality of characters, wherein determining the vector representing the phonetic feature of the textual term comprises determining a first vector representing a phonetic feature of a subset of the plurality of characters of the textual term, and wherein classifying the textual term comprises:
determining that the subset of the plurality of characters of the textual term is similar to the reference textual term based on the comparing the vector with the reference vector; and
in response to determining that the subset of the plurality of characters of the textual term is similar to the reference textual term, associating a definition of the reference textual term to the subset of the plurality of characters of the textual term. 6. The method of claim 5, comprising:
determining a second vector representing a phonetic feature of a second subset of the plurality of characters of the textual term; comparing the second vector with a second reference vector representing a phonetic feature of a second reference textual term; determining that the second subset of the plurality of characters is similar to the second reference textual term based on comparing the second vector with the second reference vector; and in response to determining that the second subset of the plurality of characters is similar to the second reference textual term, associating a definition of the second reference textual term to the second subset of the plurality of characters of the textual term. 7. The method of claim 1, wherein comparing the vector with the reference vector comprises determining a cosine distance between the vector and the reference vector. 8. The method of claim 7, wherein classifying the textual term comprises:
determining that the cosine distance is within a specific distance; and in response to determining that the cosine distance is within the specific distance, classifying the textual term as being similar to the reference textual term. 9. The method of claim 1, wherein classifying the textual term comprises
determining a likelihood that the textual term is similar to the reference textual term; and classifying the textual term based on the likelihood that the textual term is similar to the reference textual term. 10. The method of claim 1, wherein classifying the textual term comprises:
determining that the textual term is similar to the reference textual term; and in response to determining that the textual term is similar to the reference textual term, associating a definition of the reference textual term to the textual term. 11. The method of claim 1,
wherein obtaining the textual term comprises obtaining one or more textual terms that are surrounding the textual term, and wherein determining the vector representing the phonetic feature of the textual term comprises determining the vector using (i) the phonetic feature of the textual term and (ii) the one or more textual terms that are surrounding the textual term. 12. A computer-readable medium storing software having stored thereon instructions, which, when executed by one or more computers, cause the one or more computers to perform operations of:
obtaining a textual term; determining, by one or more computers, a vector representing a phonetic feature of the textual term; comparing the vector representing the phonetic feature of the textual term with a reference vector representing a phonetic feature of a reference textual term; and classifying the textual term based on the comparing the vector with the reference vector. 13. The computer-readable medium of claim 12,
wherein the textual term includes a plurality of characters, wherein determining the vector representing the phonetic feature of the textual term comprises determining a first vector representing a phonetic feature of a subset of the plurality of characters of the textual term, and wherein classifying the textual term comprises:
determining that the subset of the plurality of characters of the textual term is similar to the reference textual term based on the comparing the vector with the reference vector; and
in response to determining that the subset of the plurality of characters of the textual term is similar to the reference textual term, associating a definition of the reference textual term to the subset of the plurality of characters of the textual term. 14. The computer-readable medium of claim 13, wherein the operations comprise:
determining a second vector representing a phonetic feature of a second subset of the plurality of characters of the textual term; comparing the second vector with a second reference vector representing a phonetic feature of a second reference textual term; determining that the second subset of the plurality of characters is similar to the second reference textual term based on comparing the second vector with the second reference vector; and in response to determining that the second subset of the plurality of characters is similar to the second reference textual term, associating a definition of the second reference textual term to the second subset of the plurality of characters of the textual term. 15. The computer-readable medium of claim 12, wherein comparing the vector with the reference vector comprises determining a cosine distance between the vector and the reference vector. 16. The computer-readable medium of claim 12,
wherein obtaining the textual term comprises obtaining one or more textual terms that are surrounding the textual term, and wherein determining the vector representing the phonetic feature of the textual term comprises determining the vector using (i) the phonetic feature of the textual term and (ii) the one or more textual terms that are surrounding the textual term. 17. A system comprising:
one or more processors and one or more computer storage media storing instructions that are operable, when executed by the one or more processors, to cause the one or more processors to perform operations comprising: obtaining a textual term; determining, by one or more computers, a vector representing a phonetic feature of the textual term; comparing the vector representing the phonetic feature of the textual term with a reference vector representing a phonetic feature of a reference textual term; and classifying the textual term based on the comparing the vector with the reference vector. 18. The system of claim 17,
wherein the textual term includes a plurality of characters, wherein determining the vector representing the phonetic feature of the textual term comprises determining a first vector representing a phonetic feature of a subset of the plurality of characters of the textual term, and wherein classifying the textual term comprises:
determining that the subset of the plurality of characters of the textual term is similar to the reference textual term based on the comparing the vector with the reference vector; and
in response to determining that the subset of the plurality of characters of the textual term is similar to the reference textual term, associating a definition of the reference textual term to the subset of the plurality of characters of the textual term. 19. The system of claim 18, wherein the operations comprise:
determining a second vector representing a phonetic feature of a second subset of the plurality of characters of the textual term; comparing the second vector with a second reference vector representing a phonetic feature of a second reference textual term; determining that the second subset of the plurality of characters is similar to the second reference textual term based on comparing the second vector with the second reference vector; and in response to determining that the second subset of the plurality of characters is similar to the second reference textual term, associating a definition of the second reference textual term to the second subset of the plurality of characters of the textual term. 20. The system of claim 17,
wherein obtaining the textual term comprises obtaining one or more textual terms that are surrounding the textual term, and wherein determining the vector representing the phonetic feature of the textual term comprises determining the vector using (i) the phonetic feature of the textual term and (ii) the one or more textual terms that are surrounding the textual term. | Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for obtaining a textual term; determining, by one or more computers, a vector representing a phonetic feature of the textual term; comparing the vector representing the phonetic feature of the textual term with a reference vector representing a phonetic feature of a reference textual term; and classifying the textual term based on the comparing the vector with the reference vector.1. A computer-implemented method, comprising:
obtaining a textual term; determining, by one or more computers, a vector representing a phonetic feature of the textual term; comparing the vector representing the phonetic feature of the textual term with a reference vector representing a phonetic feature of a reference textual term; and classifying the textual term based on the comparing the vector with the reference vector. 2. The method of claim 1, wherein obtaining the textual term comprises obtaining the textual term from a resource stored at a remote computer. 3. The method of claim 1, wherein obtaining the textual term comprises obtaining the textual term from a search query. 4. The method of claim 1, wherein determining the vector representing the phonetic feature of the textual term comprises determining a pronunciation of the textual term. 5. The method of claim 1,
wherein the textual term includes a plurality of characters, wherein determining the vector representing the phonetic feature of the textual term comprises determining a first vector representing a phonetic feature of a subset of the plurality of characters of the textual term, and wherein classifying the textual term comprises:
determining that the subset of the plurality of characters of the textual term is similar to the reference textual term based on the comparing the vector with the reference vector; and
in response to determining that the subset of the plurality of characters of the textual term is similar to the reference textual term, associating a definition of the reference textual term to the subset of the plurality of characters of the textual term. 6. The method of claim 5, comprising:
determining a second vector representing a phonetic feature of a second subset of the plurality of characters of the textual term; comparing the second vector with a second reference vector representing a phonetic feature of a second reference textual term; determining that the second subset of the plurality of characters is similar to the second reference textual term based on comparing the second vector with the second reference vector; and in response to determining that the second subset of the plurality of characters is similar to the second reference textual term, associating a definition of the second reference textual term to the second subset of the plurality of characters of the textual term. 7. The method of claim 1, wherein comparing the vector with the reference vector comprises determining a cosine distance between the vector and the reference vector. 8. The method of claim 7, wherein classifying the textual term comprises:
determining that the cosine distance is within a specific distance; and in response to determining that the cosine distance is within the specific distance, classifying the textual term as being similar to the reference textual term. 9. The method of claim 1, wherein classifying the textual term comprises
determining a likelihood that the textual term is similar to the reference textual term; and classifying the textual term based on the likelihood that the textual term is similar to the reference textual term. 10. The method of claim 1, wherein classifying the textual term comprises:
determining that the textual term is similar to the reference textual term; and in response to determining that the textual term is similar to the reference textual term, associating a definition of the reference textual term to the textual term. 11. The method of claim 1,
wherein obtaining the textual term comprises obtaining one or more textual terms that are surrounding the textual term, and wherein determining the vector representing the phonetic feature of the textual term comprises determining the vector using (i) the phonetic feature of the textual term and (ii) the one or more textual terms that are surrounding the textual term. 12. A computer-readable medium storing software having stored thereon instructions, which, when executed by one or more computers, cause the one or more computers to perform operations of:
obtaining a textual term; determining, by one or more computers, a vector representing a phonetic feature of the textual term; comparing the vector representing the phonetic feature of the textual term with a reference vector representing a phonetic feature of a reference textual term; and classifying the textual term based on the comparing the vector with the reference vector. 13. The computer-readable medium of claim 12,
wherein the textual term includes a plurality of characters, wherein determining the vector representing the phonetic feature of the textual term comprises determining a first vector representing a phonetic feature of a subset of the plurality of characters of the textual term, and wherein classifying the textual term comprises:
determining that the subset of the plurality of characters of the textual term is similar to the reference textual term based on the comparing the vector with the reference vector; and
in response to determining that the subset of the plurality of characters of the textual term is similar to the reference textual term, associating a definition of the reference textual term to the subset of the plurality of characters of the textual term. 14. The computer-readable medium of claim 13, wherein the operations comprise:
determining a second vector representing a phonetic feature of a second subset of the plurality of characters of the textual term; comparing the second vector with a second reference vector representing a phonetic feature of a second reference textual term; determining that the second subset of the plurality of characters is similar to the second reference textual term based on comparing the second vector with the second reference vector; and in response to determining that the second subset of the plurality of characters is similar to the second reference textual term, associating a definition of the second reference textual term to the second subset of the plurality of characters of the textual term. 15. The computer-readable medium of claim 12, wherein comparing the vector with the reference vector comprises determining a cosine distance between the vector and the reference vector. 16. The computer-readable medium of claim 12,
wherein obtaining the textual term comprises obtaining one or more textual terms that are surrounding the textual term, and wherein determining the vector representing the phonetic feature of the textual term comprises determining the vector using (i) the phonetic feature of the textual term and (ii) the one or more textual terms that are surrounding the textual term. 17. A system comprising:
one or more processors and one or more computer storage media storing instructions that are operable, when executed by the one or more processors, to cause the one or more processors to perform operations comprising: obtaining a textual term; determining, by one or more computers, a vector representing a phonetic feature of the textual term; comparing the vector representing the phonetic feature of the textual term with a reference vector representing a phonetic feature of a reference textual term; and classifying the textual term based on the comparing the vector with the reference vector. 18. The system of claim 17,
wherein the textual term includes a plurality of characters, wherein determining the vector representing the phonetic feature of the textual term comprises determining a first vector representing a phonetic feature of a subset of the plurality of characters of the textual term, and wherein classifying the textual term comprises:
determining that the subset of the plurality of characters of the textual term is similar to the reference textual term based on the comparing the vector with the reference vector; and
in response to determining that the subset of the plurality of characters of the textual term is similar to the reference textual term, associating a definition of the reference textual term to the subset of the plurality of characters of the textual term. 19. The system of claim 18, wherein the operations comprise:
determining a second vector representing a phonetic feature of a second subset of the plurality of characters of the textual term; comparing the second vector with a second reference vector representing a phonetic feature of a second reference textual term; determining that the second subset of the plurality of characters is similar to the second reference textual term based on comparing the second vector with the second reference vector; and in response to determining that the second subset of the plurality of characters is similar to the second reference textual term, associating a definition of the second reference textual term to the second subset of the plurality of characters of the textual term. 20. The system of claim 17,
wherein obtaining the textual term comprises obtaining one or more textual terms that are surrounding the textual term, and wherein determining the vector representing the phonetic feature of the textual term comprises determining the vector using (i) the phonetic feature of the textual term and (ii) the one or more textual terms that are surrounding the textual term. | 2,600 |
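Claims 7 and 8 of the record above classify a textual term by the cosine distance between its phonetic-feature vector and a reference vector, declaring similarity when the distance is within a threshold. A hedged sketch of that comparison (the example vectors and the 0.2 threshold are made-up stand-ins; the patent does not specify how the phonetic features are extracted):

```python
import math


def cosine_distance(v, w):
    """1 - cosine similarity; 0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(v, w))
    norm = math.sqrt(sum(a * a for a in v)) * math.sqrt(sum(b * b for b in w))
    return 1.0 - dot / norm


def classify(term_vec, reference_vec, threshold=0.2):
    """Claim 8: similar iff the cosine distance is within a specific distance."""
    return cosine_distance(term_vec, reference_vec) <= threshold


# Hypothetical phonetic-feature vectors: the first pair could represent
# near-homophones, the second a phonetically unrelated term.
assert classify([0.9, 0.1, 0.4], [0.8, 0.2, 0.5]) is True
assert classify([0.9, 0.1, 0.4], [0.0, 1.0, 0.0]) is False
```

Claim 10's follow-up step would then associate the reference term's definition with the term whenever `classify` returns true.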
9,740 | 9,740 | 14,915,716 | 2,636 | In an FTTDP optical fibre network, distribution point units are reverse powered by customer premises equipment and therefore liable to power failure. When the distribution point unit loses power, it is unavailable to respond to requests for performance or metric data. A persistent manager agent functions as a proxy for the distribution point unit, gathering metric and performance data from the distribution point unit using at least one EOC channel and handling requests for performance and metric data issued from other network devices. The persistent manager can also schedule downloads of firmware and configuration data to the distribution point unit. | 1. An optical fibre network comprising an optical fibre network section, a plurality of distribution nodes linking the optical fibre network section to a plurality of customer premises units via a plurality of electrical wired segments, each distribution node being electrically powered by at least one of the customer premises units, wherein the optical fibre network section further includes at least one proxy management unit in communication with at least one of the plurality of distribution nodes and operable to receive management data from said distribution unit and to process requests for information on behalf of said distribution unit. 2. A management apparatus for receiving status information from a network node and answering requests for status information from other devices on behalf of said network node. | In an FTTDP optical fibre network, distribution point units are reverse powered by customer premises equipment and therefore liable to power failure. When the distribution point unit loses power, it is unavailable to respond to requests for performance or metric data.
A persistent manager agent functions as a proxy for the distribution point unit, gathering metric and performance data from the distribution point unit using at least one EOC channel and handling requests for performance and metric data issued from other network devices. The persistent manager can also schedule downloads of firmware and configuration data to the distribution point unit.1. An optical fibre network comprising an optical fibre network section, a plurality of distribution nodes linking the optical fibre network section to a plurality of customer premises units via a plurality of electrical wired segments, each distribution node being electrically powered by at least one of the customer premises units, wherein the optical fibre network section further includes at least one proxy management unit in communication with at least one of the plurality of distribution nodes and operable to receive management data from said distribution unit and to process requests for information on behalf of said distribution unit. 2. A management apparatus for receiving status information from a network node and answering requests for status information from other devices on behalf of said network node. | 2,600 |
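The FTTDP record above describes a proxy management unit that caches status reported by a reverse-powered distribution node and keeps answering requests after the node loses power. A minimal in-memory sketch (the class and method names are invented for illustration; a real agent would speak the network's management protocol over the EOC channel):

```python
class ProxyManagementUnit:
    """Caches the last-known status of a distribution node and answers
    requests on its behalf, in the spirit of the second claim above."""

    def __init__(self):
        self._last_status = {}    # metric name -> last reported value
        self._node_powered = False

    def report_status(self, metrics: dict) -> None:
        """Called while the node has power (e.g. over an EOC channel)."""
        self._node_powered = True
        self._last_status.update(metrics)

    def node_power_lost(self) -> None:
        """Mark the node as dark; cached metrics remain servable."""
        self._node_powered = False

    def answer_request(self, metric: str):
        """Serve the cached value whether or not the node is powered."""
        return self._last_status.get(metric)


proxy = ProxyManagementUnit()
proxy.report_status({"sync_rate_mbps": 212, "errored_seconds": 0})
proxy.node_power_lost()
# The proxy still answers after the node goes dark.
assert proxy.answer_request("sync_rate_mbps") == 212
```

The same cache is a natural place to queue the firmware and configuration downloads the abstract mentions, releasing them when `report_status` shows the node is powered again.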
9,741 | 9,741 | 14,051,419 | 2,647 | Traffic flows in a wireless network may be optimized based on a current state of the wireless network as well as based on information received from mobile devices attached to the wireless network. In one implementation, a method may include receiving, from mobile devices attached to the wireless network, values for parameters associated with applications that are executed by the mobile devices. The method may further include receiving, from network elements in the wireless network, information relating to a state of the wireless network; and determining based on the values for the parameters associated with the applications and based on the information relating to the state of the wireless network, modifications to an operation of the wireless network to optimize transmission of the traffic flows in the wireless network with respect to the parameters. | 1. A method comprising:
receiving, by one or more computing devices and from mobile devices attached to a wireless network, values for parameters associated with applications that are executed by the mobile devices, the parameters defining performance of traffic flows being used by the applications; receiving, by the one or more computing devices and from network elements in the wireless network, information relating to a state of the wireless network; and determining, by the one or more computing devices, based on the values for the parameters associated with the applications and based on the information relating to the state of the wireless network, modifications to an operation of the wireless network to optimize transmission of the traffic flows in the wireless network with respect to the parameters; and controlling, by the one or more computing devices, one or more of the network elements in the wireless network to implement the determined modifications to the operation of the wireless network. 2. The method of claim 1, wherein the parameters include parameters measuring bandwidth, latency, or jitter of the traffic flows. 3. The method of claim 2, wherein the information relating to the state of the wireless network includes information describing radio resource usage at a base station of the wireless network. 4. The method of claim 3, wherein the information describing the radio resource usage at the base station of the wireless network quantifies the radio resource usage at the base station on a per Quality of Service (QoS) Class of Identifier (QCI) basis. 5. The method of claim 3, wherein the modifications to the operation of the wireless network include:
modifications to reduce a bit rate of a set of traffic flows, of the traffic flows being used by the applications, that pass through the base station and that are associated with video streaming applications. 6. The method of claim 1, wherein the determination of the modifications to the operation of the wireless network includes:
determining the modifications based on comparisons of the values for the parameters associated with the applications to threshold values that define desired parameter values associated with the applications. 7. The method of claim 1, further comprising:
receiving a message, from a base station in the wireless network, indicating that a radio resource usage level of the base station is above a threshold value; wherein the determination of the modifications to the operation of the wireless network further includes:
determining the modifications to reduce the radio resource usage level of the base station. 8. The method of claim 7, wherein the modifications to the operation of the wireless network are selected to affect traffic flows of particular applications being executed by the mobile devices. 9. A device comprising:
a memory to store instructions; and at least one processor to execute the instructions stored by the memory to:
receive, from mobile devices attached to a wireless network, values for parameters associated with applications that are executed by the mobile devices, the parameters defining performance of traffic flows being used by the applications;
receive, from network elements in the wireless network, information relating to a state of the wireless network; and
determine, based on the values for the parameters associated with the applications and based on the information relating to the state of the wireless network, modifications to an operation of the wireless network to optimize transmission of the traffic flows in the wireless network with respect to the parameters; and
control the network elements in the wireless network to implement the determined modifications to the operation of the wireless network. 10. The device of claim 9, wherein the parameters include parameters measuring bandwidth, latency, or jitter of the traffic flows. 11. The device of claim 10, wherein the information relating to the state of the wireless network includes information describing radio resource usage at a base station of the wireless network. 12. The device of claim 11, wherein the information describing the radio resource usage at the base station of the wireless network quantifies the radio resource usage at the base station on a per Quality of Service (QoS) Class of Identifier (QCI) basis. 13. The device of claim 11, wherein the modifications to the operation of the wireless network include:
modifications to reduce a bit rate of a set of traffic flows, of the traffic flows being used by the applications, that pass through the base station and that are associated with video streaming applications. 14. The device of claim 9, wherein the at least one processor, when determining the modifications to the operation of the wireless network, further executes instructions in the memory to:
determine the modifications based on comparisons of the values for the parameters associated with the applications to threshold values that define desired parameter values associated with the applications. 15. The device of claim 9, wherein the at least one processor is further to:
receive a message, from a base station in the wireless network, indicating that a radio resource usage level of the base station is above a threshold value; wherein the determination of the modifications to the operation of the wireless network further includes:
determining the modifications to reduce the radio resource usage level of the base station. 16. The device of claim 15, wherein the modifications to the operation of the wireless network are selected to affect traffic flows of particular applications being executed by the mobile devices. 17. A method comprising:
monitoring, by a mobile device connected to a wireless network, traffic flows associated with applications executing at the mobile device, the monitoring including:
determining values for parameters, corresponding to the traffic flows, that relate to a quality of the traffic flows in the wireless network;
storing, by the mobile device, desired values for the parameters; and transmitting, by the mobile device and to a server, based on one or more of the determined values for the parameters falling below corresponding desired values for the parameters, the determined values for the parameters. 18. The method of claim 17, wherein the desired values of the parameters are stored on a per-application basis. 19. The method of claim 17, wherein the parameters include parameters relating to bandwidth, latency, or jitter of the traffic flows. 20. The method of claim 17, wherein the wireless network includes a Long Term Evolution (LTE) based network. | Traffic flows in a wireless network may be optimized based on a current state of the wireless network as well as based on information received from mobile devices attached to the wireless network. In one implementation, a method may include receiving, from mobile devices attached to the wireless network, values for parameters associated with applications that are executed by the mobile devices. The method may further include receiving, from network elements in the wireless network, information relating to a state of the wireless network; and determining based on the values for the parameters associated with the applications and based on the information relating to the state of the wireless network, modifications to an operation of the wireless network to optimize transmission of the traffic flows in the wireless network with respect to the parameters.1. A method comprising:
receiving, by one or more computing devices and from mobile devices attached to a wireless network, values for parameters associated with applications that are executed by the mobile devices, the parameters defining performance of traffic flows being used by the applications; receiving, by the one or more computing devices and from network elements in the wireless network, information relating to a state of the wireless network; and determining, by the one or more computing devices, based on the values for the parameters associated with the applications and based on the information relating to the state of the wireless network, modifications to an operation of the wireless network to optimize transmission of the traffic flows in the wireless network with respect to the parameters; and controlling, by the one or more computing devices, one or more of the network elements in the wireless network to implement the determined modifications to the operation of the wireless network. 2. The method of claim 1, wherein the parameters include parameters measuring bandwidth, latency, or jitter of the traffic flows. 3. The method of claim 2, wherein the information relating to the state of the wireless network includes information describing radio resource usage at a base station of the wireless network. 4. The method of claim 3, wherein the information describing the radio resource usage at the base station of the wireless network quantifies the radio resource usage at the base station on a per Quality of Service (QoS) Class of Identifier (QCI) basis. 5. The method of claim 3, wherein the modifications to the operation of the wireless network include:
modifications to reduce a bit rate of a set of traffic flows, of the traffic flows being used by the applications, that pass through the base station and that are associated with video streaming applications. 6. The method of claim 1, wherein the determination of the modifications to the operation of the wireless network includes:
determining the modifications based on comparisons of the values for the parameters associated with the applications to threshold values that define desired parameter values associated with the applications. 7. The method of claim 1, further comprising:
receiving a message, from a base station in the wireless network, indicating that a radio resource usage level of the base station is above a threshold value; wherein the determination of the modifications to the operation of the wireless network further includes:
determining the modifications to reduce the radio resource usage level of the base station. 8. The method of claim 7, wherein the modifications to the operation of the wireless network are selected to affect traffic flows of particular applications being executed by the mobile devices. 9. A device comprising:
a memory to store instructions; and at least one processor to execute the instructions stored by the memory to:
receive, from mobile devices attached to a wireless network, values for parameters associated with applications that are executed by the mobile devices, the parameters defining performance of traffic flows being used by the applications;
receive, from network elements in the wireless network, information relating to a state of the wireless network; and
determine, based on the values for the parameters associated with the applications and based on the information relating to the state of the wireless network, modifications to an operation of the wireless network to optimize transmission of the traffic flows in the wireless network with respect to the parameters; and
control the network elements in the wireless network to implement the determined modifications to the operation of the wireless network. 10. The device of claim 9, wherein the parameters include parameters measuring bandwidth, latency, or jitter of the traffic flows. 11. The device of claim 10, wherein the information relating to the state of the wireless network includes information describing radio resource usage at a base station of the wireless network. 12. The device of claim 11, wherein the information describing the radio resource usage at the base station of the wireless network quantifies the radio resource usage at the base station on a per Quality of Service (QoS) Class of Identifier (QCI) basis. 13. The device of claim 11, wherein the modifications to the operation of the wireless network include:
modifications to reduce a bit rate of a set of traffic flows, of the traffic flows being used by the applications, that pass through the base station and that are associated with video streaming applications. 14. The device of claim 9, wherein the at least one processor, when determining the modifications to the operation of the wireless network, further executes instructions in the memory to:
determine the modifications based on comparisons of the values for the parameters associated with the applications to threshold values that define desired parameter values associated with the applications. 15. The device of claim 9, wherein the at least one processor is further to:
receive a message, from a base station in the wireless network, indicating that a radio resource usage level of the base station is above a threshold value; wherein the determination of the modifications to the operation of the wireless network further includes:
determining the modifications to reduce the radio resource usage level of the base station. 16. The device of claim 15, wherein the modifications to the operation of the wireless network are selected to affect traffic flows of particular applications being executed by the mobile devices. 17. A method comprising:
monitoring, by a mobile device connected to a wireless network, traffic flows associated with applications executing at the mobile device, the monitoring including:
determining values for parameters, corresponding to the traffic flows, that relate to a quality of the traffic flows in the wireless network;
storing, by the mobile device, desired values for the parameters; and transmitting, by the mobile device and to a server, based on one or more of the determined values for the parameters falling below corresponding desired values for the parameters, the determined values for the parameters. 18. The method of claim 17, wherein the desired values of the parameters are stored on a per-application basis. 19. The method of claim 17, wherein the parameters include parameters relating to bandwidth, latency, or jitter of the traffic flows. 20. The method of claim 17, wherein the wireless network includes a Long Term Evolution (LTE) based network. | 2,600 |
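The per-application reporting logic in claims 17-19 of the record above can be sketched as follows. All names here (`desired_values`, `monitor`, the report dictionary) are illustrative assumptions, not from the patent, and the claim's literal "falling below corresponding desired values" comparison is kept as written, even though a real implementation would invert it for latency and jitter, where lower values are better.

```python
def should_report(measured, desired):
    """Return True when any measured parameter falls below its desired value."""
    return any(measured[p] < desired[p] for p in desired if p in measured)

# Desired values stored on a per-application basis (claim 18); values are illustrative.
desired_values = {
    "video_app": {"bandwidth_kbps": 2500, "latency_ms": 100},
}

def monitor(app, measured):
    """Transmit (here: return) the measured values only on a shortfall (claim 17)."""
    desired = desired_values.get(app, {})
    if should_report(measured, desired):
        return {"app": app, "values": measured}
    return None
```

Under this sketch, a measurement set is reported only when at least one parameter drops below its stored target, which matches the conditional transmission recited in claim 17.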
9,742 | 9,742 | 15,158,802 | 2,612 | A system for displaying information related to a flight of an aircraft and an associated method are provided. The display system comprises a dynamic synthesis image generating module, configured to generate at least two successive transition synthesis images between an image according to a first type of perspective and an image according to a second type of perspective, or between an image according to a second type of perspective and an image according to a first type of perspective, respectively, and to command the display thereof at successive transition moments. Each transition image is centered around an intermediate central point of interest, seen from an intermediate point of view situated at an intermediate observation distance from the intermediate central point of interest, which is an increasing function, a decreasing function, respectively, of the transition moment at which the image is displayed, and seen from an intermediate opening angle, which is a decreasing function, an increasing function, respectively, of the transition moment at which this image is displayed. | 1. A system destined to display information related to a flight of an aircraft, the system comprising:
a dynamic synthesis image generator configured to dynamically generate synthesis images, each synthesis image comprising a depiction of the environment situated in a vicinity of a trajectory of the aircraft, the dynamic synthesis image generator being configured to generate at least two successive three-dimensional transitional synthesis images between a three-dimensional synthesis image according to a first type of perspective and a synthesis image according to a second type of perspective, or between a synthesis image according to a second type of perspective and a three-dimensional synthesis image according to a first type of perspective, respectively, and to command the successive display of the three-dimensional transition synthesis images by a display at successive transition moments, each of the three-dimensional transition synthesis images being depicted according to the first type of perspective, centered around an intermediate central point of interest, seen from an intermediate point of view located at an intermediate viewing distance from the intermediate central point of interest and seen from an intermediate opening angle, the intermediate observation distance of each three-dimensional transition synthesis image being an increasing function, or a decreasing function, respectively, of the transition moment at which that three-dimensional transition synthesis image is displayed, and the intermediate opening angle of each three-dimensional transition synthesis image being a decreasing function, or an increasing function, respectively, of the transition moment at which the three-dimensional transition synthesis image is displayed. 2. The system according to claim 1 wherein there exists at least one first and one second successive transition moments, the second transition moment being after the first transition moment, such that:
the intermediate observation distance of a first three-dimensional transition synthesis image destined to be displayed at the first transition moment is strictly greater, or strictly less, respectively, than the intermediate observation distance of a second three-dimensional transition synthesis image destined to be displayed at the second transition moment, and
the intermediate opening angle of the first three-dimensional transition synthesis image is strictly smaller, or strictly larger, respectively, than the intermediate opening angle of the second three-dimensional transition synthesis image. 3. The system according to claim 1 wherein the intermediate observation distance of each three-dimensional transition synthesis image is a strictly increasing function, or a strictly decreasing function, respectively, of the transition moment at which that three-dimensional transition synthesis image is displayed. 4. The system according to claim 1 wherein the dynamic synthesis image generator is configured to determine the observation distance of each three-dimensional transition synthesis image according to a nonlinear function of the transition moment at which the three-dimensional transition synthesis image is displayed. 5. The system according to claim 4 wherein the nonlinear function is a convex function. 6. The system according to claim 1 wherein the opening angle of each three-dimensional transition synthesis image is a strictly decreasing function, or a strictly increasing function, respectively, of the transition moment at which that three-dimensional transition synthesis image is displayed. 7. The system according to claim 1 wherein, the synthesis image according to the first perspective type being centered around a given central point of interest, seen from a point of view situated at a given observation distance from the central point of interest, and seen from a given opening angle, the opening angle of each three-dimensional transition synthesis image is strictly larger than the opening angle of the synthesis image according to the first perspective type and the observation distance of each three-dimensional transition synthesis image is strictly greater than the observation distance of the synthesis image according to the first perspective type. 8. 
The system according to claim 7 wherein the dynamic synthesis image generator is configured to determine the opening angle and the observation distance of each three-dimensional transition synthesis image as a function of the opening angle and the observation distance of the synthesis image according to the first type of perspective. 9. The system according to claim 8 wherein the dynamic synthesis image generator is configured to determine the opening angle and the observation distance of each three-dimensional transition synthesis image as a function of the opening angle and the observation distance of the synthesis image according to the first type of perspective such that at least one dimension of the zone depicted by each three-dimensional transition synthesis image is comprised in a predetermined bounded interval around the corresponding dimension of the zone depicted by the synthesis image according to the first type of perspective. 10. The system according to claim 1 wherein the dynamic synthesis image generator is configured to:
command the display of the synthesis image according to the first type of perspective at an initial moment;
command the successive display of the successive three-dimensional transition synthesis images between the synthesis image according to the first type of perspective and the synthesis image according to the second type of perspective at the successive transition moments, the transition moments being after the initial moment; and
command the display of the synthesis image according to the second type of perspective at a final moment after the transition moments. 11. The system according to claim 1 wherein the dynamic synthesis image generator is configured to:
command the display of the synthesis image according to the second type of perspective at an initial moment,
command the successive display of the successive three-dimensional transition synthesis images between the synthesis image according to the second type of perspective and the synthesis image according to the first type of perspective at the successive transition moments, the transition moments being after the initial moment, and
command the display of the synthesis image according to the first type of perspective at a final moment after the transition moments. 12. The system according to claim 1 wherein the dynamic synthesis image generator is configured to assign a depth attribute to each pixel of a synthesis image located in a zone with a predetermined depth of the three-dimensional transition synthesis images, with the exception of pixels situated outside the predetermined depth zone, the predetermined depth zone being defined at least by a predetermined maximum altitude. 13. A method for displaying information related to a flight of an aircraft, the method comprising:
generating a synthesis image according to a first type of perspective; generating a synthesis image according to a second type of perspective; generating at least two successive three-dimensional transition synthesis images between the synthesis image according to the first type of perspective and the synthesis image according to the second type of perspective, or between the synthesis image according to the second type of perspective and the synthesis image according to the first type of perspective, respectively; and successively displaying the three-dimensional transition synthesis images by a display at a plurality of successive transition moments, each of the synthesis images comprising a depiction of the environment situated in the vicinity of a trajectory of the aircraft, each of the three-dimensional transition synthesis images being centered around an intermediate central point of interest, seen from an intermediate point of view located at an intermediate viewing distance from the intermediate central point of interest and seen from an intermediate opening angle, the intermediate observation distance of each three-dimensional transition synthesis image being an increasing function, or a decreasing function, respectively, of the transition moment at which the three-dimensional transition synthesis image is displayed, and the intermediate opening angle of each three-dimensional transition synthesis image being a decreasing function, or an increasing function, respectively, of the transition moment at which the three-dimensional transition synthesis image is displayed. 14. The method according to claim 13 further comprising:
displaying the synthesis image according to the first type of perspective at an initial moment before the transition moments; and
displaying the synthesis image according to the second type of perspective at a final moment after the transition moments. 15. The method according to claim 13 further comprising:
displaying the synthesis image according to the second type of perspective at an initial moment before the transition moments;
displaying the synthesis image according to the first type of perspective at a final moment after the transition moments. | A system for displaying information related to a flight of an aircraft and an associated method are provided. The display system comprises a dynamic synthesis image generating module, configured to generate at least two successive transition synthesis images between an image according to a first type of perspective and an image according to a second type of perspective, or between an image according to a second type of perspective and an image according to a first type of perspective, respectively, and to command the display thereof at successive transition moments. Each transition image is centered around an intermediate central point of interest, seen from an intermediate point of view situated at an intermediate observation distance from the intermediate central point of interest, which is an increasing function, a decreasing function, respectively, of the transition moment at which the image is displayed, and seen from an intermediate opening angle, which is a decreasing function, an increasing function, respectively, of the transition moment at which this image is displayed. 1. A system destined to display information related to a flight of an aircraft, the system comprising:
a dynamic synthesis image generator configured to dynamically generate synthesis images, each synthesis image comprising a depiction of the environment situated in a vicinity of a trajectory of the aircraft, the dynamic synthesis image generator being configured to generate at least two successive three-dimensional transitional synthesis images between a three-dimensional synthesis image according to a first type of perspective and a synthesis image according to a second type of perspective, or between a synthesis image according to a second type of perspective and a three-dimensional synthesis image according to a first type of perspective, respectively, and to command the successive display of the three-dimensional transition synthesis images by a display at successive transition moments, each of the three-dimensional transition synthesis images being depicted according to the first type of perspective, centered around an intermediate central point of interest, seen from an intermediate point of view located at an intermediate viewing distance from the intermediate central point of interest and seen from an intermediate opening angle, the intermediate observation distance of each three-dimensional transition synthesis image being an increasing function, or a decreasing function, respectively, of the transition moment at which that three-dimensional transition synthesis image is displayed, and the intermediate opening angle of each three-dimensional transition synthesis image being a decreasing function, or an increasing function, respectively, of the transition moment at which the three-dimensional transition synthesis image is displayed. 2. The system according to claim 1 wherein there exists at least one first and one second successive transition moments, the second transition moment being after the first transition moment, such that:
the intermediate observation distance of a first three-dimensional transition synthesis image destined to be displayed at the first transition moment is strictly greater, or strictly less, respectively, than the intermediate observation distance of a second three-dimensional transition synthesis image destined to be displayed at the second transition moment, and
the intermediate opening angle of the first three-dimensional transition synthesis image is strictly smaller, or strictly larger, respectively, than the intermediate opening angle of the second three-dimensional transition synthesis image. 3. The system according to claim 1 wherein the intermediate observation distance of each three-dimensional transition synthesis image is a strictly increasing function, or a strictly decreasing function, respectively, of the transition moment at which that three-dimensional transition synthesis image is displayed. 4. The system according to claim 1 wherein the dynamic synthesis image generator is configured to determine the observation distance of each three-dimensional transition synthesis image according to a nonlinear function of the transition moment at which the three-dimensional transition synthesis image is displayed. 5. The system according to claim 4 wherein the nonlinear function is a convex function. 6. The system according to claim 1 wherein the opening angle of each three-dimensional transition synthesis image is a strictly decreasing function, or a strictly increasing function, respectively, of the transition moment at which that three-dimensional transition synthesis image is displayed. 7. The system according to claim 1 wherein, the synthesis image according to the first perspective type being centered around a given central point of interest, seen from a point of view situated at a given observation distance from the central point of interest, and seen from a given opening angle, the opening angle of each three-dimensional transition synthesis image is strictly larger than the opening angle of the synthesis image according to the first perspective type and the observation distance of each three-dimensional transition synthesis image is strictly greater than the observation distance of the synthesis image according to the first perspective type. 8. 
The system according to claim 7 wherein the dynamic synthesis image generator is configured to determine the opening angle and the observation distance of each three-dimensional transition synthesis image as a function of the opening angle and the observation distance of the synthesis image according to the first type of perspective. 9. The system according to claim 8 wherein the dynamic synthesis image generator is configured to determine the opening angle and the observation distance of each three-dimensional transition synthesis image as a function of the opening angle and the observation distance of the synthesis image according to the first type of perspective such that at least one dimension of the zone depicted by each three-dimensional transition synthesis image is comprised in a predetermined bounded interval around the corresponding dimension of the zone depicted by the synthesis image according to the first type of perspective. 10. The system according to claim 1 wherein the dynamic synthesis image generator is configured to:
command the display of the synthesis image according to the first type of perspective at an initial moment;
command the successive display of the successive three-dimensional transition synthesis images between the synthesis image according to the first type of perspective and the synthesis image according to the second type of perspective at the successive transition moments, the transition moments being after the initial moment; and
command the display of the synthesis image according to the second type of perspective at a final moment after the transition moments. 11. The system according to claim 1 wherein the dynamic synthesis image generator is configured to:
command the display of the synthesis image according to the second type of perspective at an initial moment,
command the successive display of the successive three-dimensional transition synthesis images between the synthesis image according to the second type of perspective and the synthesis image according to the first type of perspective at the successive transition moments, the transition moments being after the initial moment, and
command the display of the synthesis image according to the first type of perspective at a final moment after the transition moments. 12. The system according to claim 1 wherein the dynamic synthesis image generator is configured to assign a depth attribute to each pixel of a synthesis image located in a zone with a predetermined depth of the three-dimensional transition synthesis images, with the exception of pixels situated outside the predetermined depth zone, the predetermined depth zone being defined at least by a predetermined maximum altitude. 13. A method for displaying information related to a flight of an aircraft, the method comprising:
generating a synthesis image according to a first type of perspective; generating a synthesis image according to a second type of perspective; generating at least two successive three-dimensional transition synthesis images between the synthesis image according to the first type of perspective and the synthesis image according to the second type of perspective, or between the synthesis image according to the second type of perspective and the synthesis image according to the first type of perspective, respectively; and successively displaying the three-dimensional transition synthesis images by a display at a plurality of successive transition moments, each of the synthesis images comprising a depiction of the environment situated in the vicinity of a trajectory of the aircraft, each of the three-dimensional transition synthesis images being centered around an intermediate central point of interest, seen from an intermediate point of view located at an intermediate viewing distance from the intermediate central point of interest and seen from an intermediate opening angle, the intermediate observation distance of each three-dimensional transition synthesis image being an increasing function, or a decreasing function, respectively, of the transition moment at which the three-dimensional transition synthesis image is displayed, and the intermediate opening angle of each three-dimensional transition synthesis image being a decreasing function, or an increasing function, respectively, of the transition moment at which the three-dimensional transition synthesis image is displayed. 14. The method according to claim 13 further comprising:
displaying the synthesis image according to the first type of perspective at an initial moment before the transition moments; and
displaying the synthesis image according to the second type of perspective at a final moment after the transition moments. 15. The method according to claim 13 further comprising:
displaying the synthesis image according to the second type of perspective at an initial moment before the transition moments;
displaying the synthesis image according to the first type of perspective at a final moment after the transition moments. | 2,600 |
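The transition behaviour claimed in the record above — observation distance growing along a convex function of the transition moment (claims 4-5) while the opening angle shrinks (claim 6) — can be sketched for one direction of the transition. The endpoint values and the quadratic easing below are illustrative assumptions, not from the patent; the reverse transition of claim 1 would simply swap the endpoints.

```python
def transition_params(t, d_start=1_000.0, d_end=20_000.0,
                      a_start=60.0, a_end=30.0):
    """Observation distance and opening angle at transition moment t in [0, 1]."""
    s = t * t                                    # convex, strictly increasing easing
    distance = d_start + (d_end - d_start) * s   # strictly increasing in t (claims 3-5)
    angle = a_start + (a_end - a_start) * t      # strictly decreasing in t (claim 6)
    return distance, angle
```

Evaluating this at successive transition moments yields the successive intermediate images of claim 1: each later image is seen from farther away and through a narrower opening angle than the one before it.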
9,743 | 9,743 | 14,573,762 | 2,625 | Techniques, devices, and systems are provided that allow for driving a device such as an OLED in various pulsed modes in which a momentary luminance greater than an apparent luminance at which the OLED is to be driven is used. The use of one or more pulsed modes allows for the lifetime of the OLED to be extended and reduces image sticking. Pulsed modes are also provided that allow for color tuning of the device by activating different portions of one or more emissive areas of the device. | 1. A method of operating an OLED display device, the method comprising:
receiving an input signal indicating an apparent luminance to be generated by at least one OLED in the display during a first frame time; providing a first drive signal to the at least one OLED, the first drive signal comprising a waveform specifying an output for the at least one OLED during the first frame time, wherein the first drive signal produces a momentary luminance greater than the apparent luminance for at least a portion of the first frame time. 2. The method of claim 1, further comprising:
selecting the waveform from among a plurality of predefined waveforms. 3. The method of claim 2, wherein the plurality of predefined waveforms are stored by the device. 4. The method of claim 2, wherein the waveform is selected based upon an expected degradation of the at least one OLED. 5. The method of claim 2, wherein the waveform is selected based upon a factor selected from the group consisting of: the age of the at least one OLED, a measurement of an operating parameter of the at least one OLED, a known relationship of luminance efficacy to luminance of the at least one OLED, and a temperature of the at least one OLED. 6. The method of claim 2, wherein the waveform is selected to activate a selected region of an emissive layer within the at least one OLED. 7-11. (canceled) 12. The method of claim 1, wherein the first drive signal specifies a voltage or a current at which to drive the at least one OLED during the first frame time. 13. (canceled) 14. The method of claim 1, wherein a total integrated luminance resulting from the waveform during the first frame time is equivalent to a total integrated luminance of the apparent luminance over the first frame time. 15. The method of claim 1, wherein the first frame time is defined by a single frame of a video provided for display on the OLED display. 16-17. (canceled) 18. The method of claim 15, wherein the waveform is periodic and has a frequency greater than a frame frequency of the input signal. 19-20. (canceled) 21. The method of claim 1, wherein the first drive signal comprises a basic drive voltage applied concurrently with the waveform. 22-23. (canceled) 24. The method of claim 1, further comprising providing a second drive signal to the at least one OLED during a second frame time, wherein the second drive signal produces a momentary luminance equal to the apparent luminance. 25-28. (canceled) 29. A display device comprising:
at least one OLED; a receiver configured to receive a display signal indicating an apparent luminance for the at least one OLED during a first frame time; and a drive circuit in signal communication with the at least one OLED and configured to provide a first drive signal to the at least one OLED based upon a waveform; a processor configured to generate the waveform, wherein the waveform defines a momentary luminance during at least a portion of the first frame time that is greater than the apparent luminance. 30. The device of claim 29, wherein the at least one OLED comprises a plurality of emissive layers, each separated from an adjacent emissive layer of the plurality of emissive layers by a blocking layer. 31. The device of claim 29, wherein the at least one OLED comprises an emissive region containing at least two regions, each region configured to emit light having a peak wavelength different than the other. 32. The device of claim 29, wherein the processor is configured to generate the waveform by selecting the waveform from among a plurality of predefined waveforms. 33-43. (canceled) 44. The device of claim 29, wherein a total integrated luminance resulting from the waveform during the first frame time is equivalent to a total integrated luminance of the apparent luminance over the first frame time. 45. The device of claim 29, wherein the first frame time is defined by a single frame of a video provided for display on the OLED display. 46-53. (canceled) 54. The device of claim 29, wherein the drive circuit is further configured to provide a second drive signal to the at least one OLED during a second frame time, wherein the second drive signal produces a momentary luminance equal to the apparent luminance. 55-58. (canceled) | Techniques, devices, and systems are provided that allow for driving a device such as an OLED in various pulsed modes in which a momentary luminance greater than an apparent luminance at which the OLED is to be driven is used. 
The use of one or more pulsed modes allows for the lifetime of the OLED to be extended and reduces image sticking. Pulsed modes are also provided that allow for color tuning of the device by activating different portions of one or more emissive areas of the device. 1. A method of operating an OLED display device, the method comprising:
receiving an input signal indicating an apparent luminance to be generated by at least one OLED in the display during a first frame time; providing a first drive signal to the at least one OLED, the first drive signal comprising a waveform specifying an output for the at least one OLED during the first frame time, wherein the first drive signal produces a momentary luminance greater than the apparent luminance for at least a portion of the first frame time. 2. The method of claim 1, further comprising:
selecting the waveform from among a plurality of predefined waveforms. 3. The method of claim 2, wherein the plurality of predefined waveforms are stored by the device. 4. The method of claim 2, wherein the waveform is selected based upon an expected degradation of the at least one OLED. 5. The method of claim 2, wherein the waveform is selected based upon a factor selected from the group consisting of: the age of the at least one OLED, a measurement of an operating parameter of the at least one OLED, a known relationship of luminance efficacy to luminance of the at least one OLED, and a temperature of the at least one OLED. 6. The method of claim 2, wherein the waveform is selected to activate a selected region of an emissive layer within the at least one OLED. 7-11. (canceled) 12. The method of claim 1, wherein the first drive signal specifies a voltage or a current at which to drive the at least one OLED during the first frame time. 13. (canceled) 14. The method of claim 1, wherein a total integrated luminance resulting from the waveform during the first frame time is equivalent to a total integrated luminance of the apparent luminance over the first frame time. 15. The method of claim 1, wherein the first frame time is defined by a single frame of a video provided for display on the OLED display. 16-17. (canceled) 18. The method of claim 15, wherein the waveform is periodic and has a frequency greater than a frame frequency of the input signal. 19-20. (canceled) 21. The method of claim 1, wherein the first drive signal comprises a basic drive voltage applied concurrently with the waveform. 22-23. (canceled) 24. The method of claim 1, further comprising providing a second drive signal to the at least one OLED during a second frame time, wherein the second drive signal produces a momentary luminance equal to the apparent luminance. 25-28. (canceled) 29. A display device comprising:
at least one OLED; a receiver configured to receive a display signal indicating an apparent luminance for the at least one OLED during a first frame time; and a drive circuit in signal communication with the at least one OLED and configured to provide a first drive signal to the at least one OLED based upon a waveform; a processor configured to generate the waveform, wherein the waveform defines a momentary luminance during at least a portion of the first frame time that is greater than the apparent luminance. 30. The device of claim 29, wherein the at least one OLED comprises a plurality of emissive layers, each separated from an adjacent emissive layer of the plurality of emissive layers by a blocking layer. 31. The device of claim 29, wherein the at least one OLED comprises an emissive region containing at least two regions, each region configured to emit light having a peak wavelength different than the other. 32. The device of claim 29, wherein the processor is configured to generate the waveform by selecting the waveform from among a plurality of predefined waveforms. 33-43. (canceled) 44. The device of claim 29, wherein a total integrated luminance resulting from the waveform during the first frame time is equivalent to a total integrated luminance of the apparent luminance over the first frame time. 45. The device of claim 29, wherein the first frame time is defined by a single frame of a video provided for display on the OLED display. 46-53. (canceled) 54. The device of claim 29, wherein the drive circuit is further configured to provide a second drive signal to the at least one OLED during a second frame time, wherein the second drive signal produces a momentary luminance equal to the apparent luminance. 55-58. (canceled) | 2,600 |
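Claims 1 and 14 of the OLED patent above describe a drive waveform whose momentary luminance exceeds the apparent luminance for part of a frame while the total integrated luminance over the frame stays equivalent to the apparent luminance integrated over that frame. A minimal Python sketch of that relationship is below; the function name, the rectangular pulse shape, and the sample counts are illustrative assumptions, not details from the patent.

```python
def pulse_waveform(apparent_luminance, frame_time, peak_luminance, steps=1000):
    """Build a pulsed luminance profile: momentary luminance exceeds the
    apparent luminance for a portion of the frame (claim 1), while the
    integrated luminance over the whole frame matches the apparent
    luminance integrated over the same frame (claim 14)."""
    if peak_luminance <= apparent_luminance:
        raise ValueError("peak luminance must exceed apparent luminance")
    # Duty cycle chosen so that peak * duty == apparent.
    duty = apparent_luminance / peak_luminance
    dt = frame_time / steps
    on_steps = round(duty * steps)
    # Peak output for the duty portion of the frame, zero for the rest.
    samples = [peak_luminance] * on_steps + [0.0] * (steps - on_steps)
    return samples, dt

# 100 nits apparent over a 60 Hz frame, driven at a 400-nit peak:
samples, dt = pulse_waveform(100.0, 1 / 60, 400.0)
integrated = sum(s * dt for s in samples)  # equals 100.0 * (1 / 60)
```

The rectangular pulse is only one waveform satisfying the equivalence; claim 18's periodic waveform with a frequency above the frame frequency would satisfy it equally well.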
9,744 | 9,744 | 14,291,404 | 2,647 | One example of determining an estimated time of arrival (ETA) based on calibrated distance includes a method implemented by a processor included in a mobile device to be carried by a user. An estimated distance between a starting location and an ending location is received. A calibration factor based on a location of the mobile device on the user's body and a movement pace of the user is determined. The estimated distance between the starting location and the ending location is modified based, in part, on the determined calibration factor resulting in a modified estimated distance. An estimated time to arrive (ETA) at the ending location is determined based, in part, on the modified estimated distance. | 1. A method comprising:
receiving, by a processor included in a mobile device to be carried by a user, an estimated distance between a starting location and an ending location; determining, by the processor, a calibration factor based on a location of the mobile device on the user's body and a movement pace of the user; modifying, by the processor, the estimated distance between the starting location and the ending location based, in part, on the determined calibration factor resulting in a modified estimated distance; and determining, by the processor, an estimated time to arrive (ETA) at the ending location based, in part, on the modified estimated distance. 2. The method of claim 1, wherein determining the calibration factor comprises identifying the calibration factor from a computer-readable storage medium that stores a plurality of calibration factors, each calibration factor representing a respective pair including an on-body location of the mobile device on the user's body and a movement pace of the user. 3. The method of claim 2, wherein the calibration factor is a default calibration factor, and wherein the method further comprises determining an updated calibration factor based, in part, on a past timestamp at which the calibration factor was determined. 4. The method of claim 2, wherein the plurality of calibration factors are stored in a database table, wherein a first dimension of the database table represents a plurality of on-body locations and the second dimension of the database table represents a plurality of movement paces. 5. The method of claim 4, wherein the plurality of on-body locations comprises an upper arm, a hand, a hip, and a leg. 6. The method of claim 4, wherein the plurality of movement paces comprises a slow walk, a brisk walk, a run, and a jog. 7. 
The method of claim 1, wherein receiving the estimated distance between the starting location and the ending location comprises determining the estimated distance based, in part, on a stride length of the user and a number of steps taken by the user during a specified time interval. 8. The method of claim 1, wherein modifying the estimated distance between the starting location and the ending location based, in part, on the determined calibration factor resulting in a modified estimated distance comprises multiplying the estimated distance by the determined calibration factor. 9. The method of claim 8, wherein determining the estimated time to arrive at the ending location based, in part, on the modified estimated distance comprises:
receiving a movement speed of the user; and dividing the modified estimated distance by the movement speed. 10. The method of claim 9, wherein receiving the movement speed comprises determining the movement speed based, in part, on a stride length of the user and a number of steps taken by the user during a specified time interval. 11. The method of claim 1, wherein determining a calibration factor representing an on-body location of the mobile device on the user's body and a movement pace of the user comprises:
determining a first distance between two locations based on a plurality of Global Navigation Satellite System (GNSS) coordinates identified between the two locations; determining a second distance between the two locations based, in part, on the location of the mobile device on the user's body and the movement pace; and dividing the second distance by the first distance. 12. A system comprising:
a processor included in a mobile device to be carried by a user; and a computer-readable medium storing instructions executable by the processor to perform operations comprising:
determining an estimated distance from a current location to an ending location;
modifying the estimated distance based, in part, on a location of the mobile device on the user's body and a movement pace of the user resulting in a modified estimated distance; and
determining an estimated time to arrive (ETA) at the ending location based, in part, on the modified estimated distance. 13. The system of claim 12, wherein modifying the estimated distance based, in part, on the location of the mobile device on the user's body and the movement pace of the user resulting in the modified estimated distance comprises:
identifying a calibration factor representing a pair including the location of the mobile device on the user's body and the movement pace of the user from a computer-readable storage medium that stores a plurality of calibration factors, each calibration factor representing a respective pair of on-body location of the mobile device and a movement pace of the user; and multiplying the estimated distance by the identified calibration factor. 14. The system of claim 13, wherein the operations further comprise determining the calibration factor representing a location of the mobile device on the user's body and a movement pace of the user by:
determining a first distance between two locations based on a plurality of Global Navigation Satellite System (GNSS) coordinates identified between the two locations; determining a second distance between the two locations based, in part, on the on-body location of the mobile device on the user's body and the movement pace; and dividing the second distance by the first distance. 15. The system of claim 13, wherein the plurality of calibration factors are stored in a database table, wherein a first dimension of the database table represents a plurality of on-body locations and the second dimension of the database table represents a plurality of movement paces, wherein the plurality of on-body locations comprises an upper arm, a hand, a hip, and a leg, and wherein the plurality of movement paces comprises a slow walk, a brisk walk, a run, and a jog. 16. The system of claim 12, wherein determining the estimated distance from the current location to the ending location comprises determining the estimated distance based, in part, on a stride length of the user and a number of steps taken by the user during a specified time interval. 17. A non-transitory computer-readable medium storing instructions executable by a processor included in a mobile device to be carried by a person, the instructions executable by the processor to perform operations comprising:
determining an estimated distance from a current location to an ending location; and modifying the estimated distance based, in part, on a location of the mobile device on the user's body and a movement pace of the user resulting in a modified estimated distance. 18. The medium of claim 17, wherein the operations further comprise determining an estimated time to arrive (ETA) at the ending location based, in part, on the modified estimated distance. 19. The medium of claim 17, wherein modifying the estimated distance based, in part, on the location of the mobile device on the user's body and the movement pace of the user resulting in the modified estimated distance comprises:
identifying a calibration factor representing a pair including the location of the mobile device on the user's body and the movement pace of the person from a computer-readable storage medium that stores a plurality of calibration factors, each calibration factor representing a respective on-body location of the mobile device on the user's body and a respective movement pace of the user; and multiplying the estimated distance by the identified calibration factor. 20. The medium of claim 19, wherein the operations further comprise determining the calibration factor representing a location of the mobile device on the user's body and a movement pace of the user by:
determining a first distance between two locations based on a plurality of Global Navigation Satellite System (GNSS) coordinates identified between the two locations; determining a second distance between the two locations based, in part, on the on-body location of the mobile device on the user's body and the movement pace; and dividing the second distance by the first distance. | One example of determining an estimated time of arrival (ETA) based on calibrated distance includes a method implemented by a processor included in a mobile device to be carried by a user. An estimated distance between a starting location and an ending location is received. A calibration factor based on a location of the mobile device on the user's body and a movement pace of the user is determined. The estimated distance between the starting location and the ending location is modified based, in part, on the determined calibration factor resulting in a modified estimated distance. An estimated time to arrive (ETA) at the ending location is determined based, in part, on the modified estimated distance.1. A method comprising:
receiving, by a processor included in a mobile device to be carried by a user, an estimated distance between a starting location and an ending location; determining, by the processor, a calibration factor based on a location of the mobile device on the user's body and a movement pace of the user; modifying, by the processor, the estimated distance between the starting location and the ending location based, in part, on the determined calibration factor resulting in a modified estimated distance; and determining, by the processor, an estimated time to arrive (ETA) at the ending location based, in part, on the modified estimated distance. 2. The method of claim 1, wherein determining the calibration factor comprises identifying the calibration factor from a computer-readable storage medium that stores a plurality of calibration factors, each calibration factor representing a respective pair including an on-body location of the mobile device on the user's body and a movement pace of the user. 3. The method of claim 2, wherein the calibration factor is a default calibration factor, and wherein the method further comprises determining an updated calibration factor based, in part, on a past timestamp at which the calibration factor was determined. 4. The method of claim 2, wherein the plurality of calibration factors are stored in a database table, wherein a first dimension of the database table represents a plurality of on-body locations and the second dimension of the database table represents a plurality of movement paces. 5. The method of claim 4, wherein the plurality of on-body locations comprises an upper arm, a hand, a hip, and a leg. 6. The method of claim 4, wherein the plurality of movement paces comprises a slow walk, a brisk walk, a run, and a jog. 7. 
The method of claim 1, wherein receiving the estimated distance between the starting location and the ending location comprises determining the estimated distance based, in part, on a stride length of the user and a number of steps taken by the user during a specified time interval. 8. The method of claim 1, wherein modifying the estimated distance between the starting location and the ending location based, in part, on the determined calibration factor resulting in a modified estimated distance comprises multiplying the estimated distance by the determined calibration factor. 9. The method of claim 8, wherein determining the estimated time to arrive at the ending location based, in part, on the modified estimated distance comprises:
receiving a movement speed of the user; and dividing the modified estimated distance by the movement speed. 10. The method of claim 9, wherein receiving the movement speed comprises determining the movement speed based, in part, on a stride length of the user and a number of steps taken by the user during a specified time interval. 11. The method of claim 1, wherein determining a calibration factor representing an on-body location of the mobile device on the user's body and a movement pace of the user comprises:
determining a first distance between two locations based on a plurality of Global Navigation Satellite System (GNSS) coordinates identified between the two locations; determining a second distance between the two locations based, in part, on the location of the mobile device on the user's body and the movement pace; and dividing the second distance by the first distance. 12. A system comprising:
a processor included in a mobile device to be carried by a user; and a computer-readable medium storing instructions executable by the processor to perform operations comprising:
determining an estimated distance from a current location to an ending location;
modifying the estimated distance based, in part, on a location of the mobile device on the user's body and a movement pace of the user resulting in a modified estimated distance; and
determining an estimated time to arrive (ETA) at the ending location based, in part, on the modified estimated distance. 13. The system of claim 12, wherein modifying the estimated distance based, in part, on the location of the mobile device on the user's body and the movement pace of the user resulting in the modified estimated distance comprises:
identifying a calibration factor representing a pair including the location of the mobile device on the user's body and the movement pace of the user from a computer-readable storage medium that stores a plurality of calibration factors, each calibration factor representing a respective pair of on-body location of the mobile device and a movement pace of the user; and multiplying the estimated distance by the identified calibration factor. 14. The system of claim 13, wherein the operations further comprise determining the calibration factor representing a location of the mobile device on the user's body and a movement pace of the user by:
determining a first distance between two locations based on a plurality of Global Navigation Satellite System (GNSS) coordinates identified between the two locations; determining a second distance between the two locations based, in part, on the on-body location of the mobile device on the user's body and the movement pace; and dividing the second distance by the first distance. 15. The system of claim 13, wherein the plurality of calibration factors are stored in a database table, wherein a first dimension of the database table represents a plurality of on-body locations and the second dimension of the database table represents a plurality of movement paces, wherein the plurality of on-body locations comprises an upper arm, a hand, a hip, and a leg, and wherein the plurality of movement paces comprises a slow walk, a brisk walk, a run, and a jog. 16. The system of claim 12, wherein determining the estimated distance from the current location to the ending location comprises determining the estimated distance based, in part, on a stride length of the user and a number of steps taken by the user during a specified time interval. 17. A non-transitory computer-readable medium storing instructions executable by a processor included in a mobile device to be carried by a person, the instructions executable by the processor to perform operations comprising:
determining an estimated distance from a current location to an ending location; and modifying the estimated distance based, in part, on a location of the mobile device on the user's body and a movement pace of the user resulting in a modified estimated distance. 18. The medium of claim 17, wherein the operations further comprise determining an estimated time to arrive (ETA) at the ending location based, in part, on the modified estimated distance. 19. The medium of claim 17, wherein modifying the estimated distance based, in part, on the location of the mobile device on the user's body and the movement pace of the user resulting in the modified estimated distance comprises:
identifying a calibration factor representing a pair including the location of the mobile device on the user's body and the movement pace of the person from a computer-readable storage medium that stores a plurality of calibration factors, each calibration factor representing a respective on-body location of the mobile device on the user's body and a respective movement pace of the user; and multiplying the estimated distance by the identified calibration factor. 20. The medium of claim 19, wherein the operations further comprise determining the calibration factor representing a location of the mobile device on the user's body and a movement pace of the user by:
determining a first distance between two locations based on a plurality of Global Navigation Satellite System (GNSS) coordinates identified between the two locations; determining a second distance between the two locations based, in part, on the on-body location of the mobile device on the user's body and the movement pace; and dividing the second distance by the first distance. | 2,600 |
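The ETA claims above describe a concrete computation: look up a calibration factor keyed by (on-body location, movement pace), multiply the estimated distance by it (claim 8), and divide by movement speed (claim 9); the factor itself is the sensor-derived distance divided by the GNSS-derived distance between the same two points (claims 11, 14, 20). A minimal Python sketch follows; the table values, function names, and units are illustrative assumptions, not values from the patent.

```python
# Illustrative calibration table (claims 2 and 4): the numeric values are
# made-up examples; real factors come from the GNSS procedure below.
CALIBRATION_TABLE = {
    ("hip", "brisk walk"): 0.96,
    ("upper arm", "run"): 1.08,
}

def calibration_factor(gnss_distance, sensed_distance):
    """Claims 11/14/20: divide the distance derived from on-body sensing
    and movement pace by the GNSS-derived distance between the same
    two locations."""
    return sensed_distance / gnss_distance

def eta_seconds(estimated_distance_m, on_body_location, pace, speed_m_s):
    """Claims 1, 8 and 9: scale the estimated distance by the calibration
    factor for this (location, pace) pair, then divide by movement speed."""
    modified = estimated_distance_m * CALIBRATION_TABLE[(on_body_location, pace)]
    return modified / speed_m_s

# 1000 m estimate, phone on the hip at a brisk walk, moving 1.5 m/s:
# (1000 * 0.96) / 1.5 = 640 seconds
eta = eta_seconds(1000.0, "hip", "brisk walk", 1.5)
```

Per claim 10, the movement speed itself can come from the same pedometry data (stride length times step count over a time interval), so the whole pipeline can run on-device without continuous GNSS.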
9,745 | 9,745 | 14,604,563 | 2,618 | A method for stereoscopically presenting visual content is disclosed. The method comprises identifying and distinguishing between a first type of content and a second type of content of a frame to be stereoscopically displayed. The method also comprises rendering the first type of content in a first left and a first right frame from a single perspective using a first stereoscopic rendering method. Further, the method comprises rendering the second type of content in a second left and a second right frame using a second, different stereoscopic method from two different perspectives. Additionally, the method comprises merging the first and second left frames and the first and second right frames to produce a resultant left frame and a resultant right frame. Finally, the method comprises displaying the resultant left frame and the resultant right frame for stereoscopic perception by a viewer. | 1. A method for stereoscopically presenting visual content, comprising:
identifying and distinguishing between a first type of content and a second type of content of a frame to be stereoscopically displayed; rendering the first type of content in a first left and a first right frame from a single perspective using a first stereoscopic rendering method; rendering the second type of content in a second left and a second right frame using a second, different stereoscopic method from two different perspectives; and merging the first and second left frames and the first and second right frames to produce a resultant left frame and a resultant right frame; displaying the resultant left frame and the resultant right frame for stereoscopic perception by a viewer. 2. The method of claim 1, wherein the first stereoscopic rendering method is a depth-image based rendering method. 3. The method of claim 2, wherein the second stereoscopic rendering method is a 3D vision method, wherein the two different perspectives correspond to replicated draw calls for left and right eyes. 4. The method of claim 1, where the first and second types of content are distinguished based on whether to include depth-blended elements. 5. The method of claim 4, where elements involving transparency effects are associated with the second type of content. 6. The method of claim 5, wherein HUD elements are associated with the second type of content. 7. The method of claim 1, where the identifying and distinguishing are performed in real time during stereoscopic rendering. 8. A method for generating a stereoscopic representation of a frame including content of a first type and content of a second type, comprising:
using depth-image based rendering to stereoscopically represent the content of the first type; using 3D-vision rendering to stereoscopically represent the content of the second type; and merging outputs of the depth-image based rendering and the 3D-vision rendering to produce a left and right frame, wherein the left and right frames are presented in a merged fashion so that both types of content are stereoscopically perceivable by a user. 9. The method of claim 8, wherein the content of the second type comprises a HUD element. 10. The method of claim 8, wherein the content of the second type is a transparent element. 11. The method of claim 8, wherein the content of the second type is an object in a foreground of an image. 12. The method of claim 8, wherein the using depth-image based rendering further comprises:
performing a plurality of operations to stereoscopically represent the content of the first type selected from the group consisting of: depth pre-pass, shadow map pass, opaque object pass, transparent object pass and post-process pass. 13. The method of claim 8, wherein the depth-image based rendering on the content of the first type is performed on a majority of the frame and is faster image processing compared to 3D-vision rendering. 14. A system for stereoscopically presenting visual content, the system comprising:
a memory storing information related to the visual content; a GPU coupled to the memory, the GPU operable to implement the method of stereoscopically presenting visual content, the method comprising:
identifying and distinguishing between a first type of content and a second type of content of a frame to be stereoscopically displayed;
rendering the first type of content in a first left and a first right frame from a single perspective using a first stereoscopic rendering method;
rendering the second type of content in a second left and a second right frame using a second, different stereoscopic method from two different perspectives; and
merging the first and second left frames and the first and second right frames to produce a resultant left frame and a resultant right frame;
displaying the resultant left frame and the resultant right frame for stereoscopic perception by a viewer. 15. The system of claim 14, wherein the first stereoscopic rendering method is a depth-image based rendering method. 16. The system of claim 15, wherein the second stereoscopic rendering method is a 3D vision method, wherein the two different perspectives correspond to replicated draw calls for left and right eyes. 17. The system of claim 14, where the first and second types of content are distinguished based on whether to include depth-blended elements. 18. The system of claim 17, where elements involving transparency effects are associated with the second type of content. 19. The system of claim 18, wherein HUD elements are associated with the second type of content. 20. The system of claim 14, where the identifying and distinguishing are performed in real time during stereoscopic rendering. | A method for stereoscopically presenting visual content is disclosed. The method comprises identifying and distinguishing between a first type of content and a second type of content of a frame to be stereoscopically displayed. The method also comprises rendering the first type of content in a first left and a first right frame from a single perspective using a first stereoscopic rendering method. Further, the method comprises rendering the second type of content in a second left and a second right frame using a second, different stereoscopic method from two different perspectives. Additionally, the method comprises merging the first and second left frames and the first and second right frames to produce a resultant left frame and a resultant right frame. Finally, the method comprises displaying the resultant left frame and the resultant right frame for stereoscopic perception by a viewer.1. A method for stereoscopically presenting visual content, comprising:
identifying and distinguishing between a first type of content and a second type of content of a frame to be stereoscopically displayed; rendering the first type of content in a first left and a first right frame from a single perspective using a first stereoscopic rendering method; rendering the second type of content in a second left and a second right frame using a second, different stereoscopic method from two different perspectives; and merging the first and second left frames and the first and second right frames to produce a resultant left frame and a resultant right frame; displaying the resultant left frame and the resultant right frame for stereoscopic perception by a viewer. 2. The method of claim 1, wherein the first stereoscopic rendering method is a depth-image based rendering method. 3. The method of claim 2, wherein the second stereoscopic rendering method is a 3D vision method, wherein the two different perspectives correspond to replicated draw calls for left and right eyes. 4. The method of claim 1, where the first and second types of content are distinguished based on whether to include depth-blended elements. 5. The method of claim 4, where elements involving transparency effects are associated with the second type of content. 6. The method of claim 5, wherein HUD elements are associated with the second type of content. 7. The method of claim 1, where the identifying and distinguishing are performed in real time during stereoscopic rendering. 8. A method for generating a stereoscopic representation of a frame including content of a first type and content of a second type, comprising:
using depth-image based rendering to stereoscopically represent the content of the first type; using 3D-vision rendering to stereoscopically represent the content of the second type; and merging outputs of the depth-image based rendering and the 3D-vision rendering to produce a left and right frame, wherein the left and right frames are presented in a merged fashion so that both types of content are stereoscopically perceivable by a user. 9. The method of claim 8, wherein the content of the second type comprises a HUD element. 10. The method of claim 8, wherein the content of the second type is a transparent element. 11. The method of claim 8, wherein the content of the second type is an object in a foreground of an image. 12. The method of claim 8, wherein the using depth-image based rendering further comprises:
performing a plurality of operations to stereoscopically represent the content of the first type selected from the group consisting of: depth pre-pass, shadow map pass, opaque object pass, transparent object pass and post-process pass. 13. The method of claim 8, wherein the depth-image based rendering on the content of the first type is performed on a majority of the frame and is faster image processing compared to 3D-vision rendering. 14. A system for stereoscopically presenting visual content, the system comprising:
a memory storing information related to the visual content; a GPU coupled to the memory, the GPU operable to implement the method of stereoscopically presenting visual content, the method comprising:
identifying and distinguishing between a first type of content and a second type of content of a frame to be stereoscopically displayed;
rendering the first type of content in a first left and a first right frame from a single perspective using a first stereoscopic rendering method;
rendering the second type of content in a second left and a second right frame using a second, different stereoscopic method from two different perspectives; and
merging the first and second left frames and the first and second right frames to produce a resultant left frame and a resultant right frame;
displaying the resultant left frame and the resultant right frame for stereoscopic perception by a viewer. 15. The system of claim 14, wherein the first stereoscopic rendering method is a depth-image based rendering method. 16. The system of claim 15, wherein the second stereoscopic rendering method is a 3D vision method, wherein the two different perspectives correspond to replicated draw calls for left and right eyes. 17. The system of claim 14, where the first and second types of content are distinguished based on whether to include depth-blended elements. 18. The system of claim 17, where elements involving transparency effects are associated with the second type of content. 19. The system of claim 18, wherein HUD elements are associated with the second type of content. 20. The system of claim 14, where the identifying and distinguishing are performed in real time during stereoscopic rendering. | 2,600 |
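Claims 1 and 8 of the stereoscopic-rendering patent above merge the outputs of the two rendering paths into one left and one right frame. The claims do not specify the merge operation, so the mask-based per-pixel composite below is an assumption: wherever the second content type (HUD, transparent elements) was drawn, take the 3D-vision output; elsewhere keep the depth-image-based output. All names and the tiny grayscale frames are illustrative.

```python
def merge_frames(dibr_frame, vision_frame, mask):
    """Per-pixel merge of the two rendering outputs (claims 1 and 8):
    where the mask marks second-type content (HUD, transparency) take
    the 3D-vision render; elsewhere keep the depth-image-based render."""
    return [
        [v if m else d for d, v, m in zip(d_row, v_row, m_row)]
        for d_row, v_row, m_row in zip(dibr_frame, vision_frame, mask)
    ]

# Tiny 2x2 grayscale example: scene from DIBR, HUD from 3D-vision.
dibr_left = [[10, 10], [10, 10]]
vision_left = [[255, 0], [0, 255]]
hud_mask = [[True, False], [False, True]]
merged_left = merge_frames(dibr_left, vision_left, hud_mask)
# merged_left == [[255, 10], [10, 255]]
```

The same merge would run twice per frame, once for the left-eye pair and once for the right-eye pair, to produce the resultant frames that claim 1 displays.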
9,746 | 9,746 | 13,709,741 | 2,612 | In one embodiment, a method includes receiving information associated with an image. Information regarding a viewing context for displaying the image may be received. The viewing context may comprise one or more factors, including, by way of example and not limitation: technical specifications for a display device, a physical environment of the display device, a state of the display device, and user preferences. Customization of the image may comprise modifying any aspect of the image, over the whole image, or just a portion of the image, such as, by way of example and not limitation: luminance, chrominance, resolution, etc. The image is customized with respect to the viewing context, and then the customized image is provided for display. | 1. A method comprising:
by a computing device, receiving information associated with an image; by the computing device, receiving information regarding a viewing context for displaying the image; by the computing device, customizing the image with respect to the viewing context; and by the computing device, providing the customized image for display. 2. The method of claim 1, wherein customizing the image comprises customizing metadata associated with the image, and wherein providing the customized image comprises providing the image with the customized metadata. 3. The method of claim 2, wherein the image has a high dynamic range, and wherein the image was created based on a set of images of a subject, each image in the set of images having a different exposure level. 4. The method of claim 1, wherein customizing the image comprises generating a new version of the image, and wherein providing the customized image comprises providing the new version of the image. 5. The method of claim 1, further comprising:
receiving a request to display the image on a display device, wherein the viewing context comprises technical specifications for the display device. 6. The method of claim 5, wherein the technical specifications for the display device comprise: dynamic range maximum colors, screen resolution, screen dimensions, pixel density, pixels per degree, aspect ratio, maximum viewing angle, typical viewing distance, brightness, response time, photosensor capabilities, networking capabilities, GPS capabilities, or battery life. 7. The method of claim 1, wherein the viewing context comprises information regarding a physical environment of the display device, the physical environment comprising: ambient light, location, time of day, or time of year. 8. The method of claim 1, wherein the viewing context comprises information regarding a state of the display device, the state comprising: power availability, user-configurable display settings, or network connectivity. 9. The method of claim 1, wherein the viewing context comprises information regarding user viewing preferences. 10. The method of claim 9, wherein the user viewing preferences comprise preferences of a user associated with the display device. 11. The method of claim 9, wherein the user viewing preferences comprise preferences of an expert user. 12. The method of claim 9, wherein the user viewing preferences comprise preferences of one or more social-networking connections of a user associated with the display device. 13. One or more computer-readable non-transitory storage media embodying software that is operable when executed to:
by a computing device, receive information associated with an image; by the computing device, receive information regarding a viewing context for displaying the image; by the computing device, customize the image with respect to the viewing context; and by the computing device, provide the customized image for display. 14. The media of claim 13, wherein the software operable to customize the image comprises software operable to customize metadata associated with the image, and wherein the software operable to provide the customized image comprises software operable to provide the image with the customized metadata. 15. The media of claim 14, wherein the image has a high dynamic range, and wherein the image was created based on a set of images of a subject, each image in the set of images having a different exposure level. 16. The media of claim 13, wherein the viewing context comprises information regarding user viewing preferences. 17. A system comprising:
one or more processors; and a memory coupled to the processors comprising instructions executable by the processors, the processors being operable when executing the instructions to:
receive information associated with an image;
receive information regarding a viewing context for displaying the image;
customize the image with respect to the viewing context; and
provide the customized image for display. 18. The system of claim 17, wherein the processors being operable to customize the image comprises the processors being operable to customize metadata associated with the image, and wherein the processors being operable to provide the customized image comprises the processors being operable to provide the image with the customized metadata. 19. The system of claim 18, wherein the image has a high dynamic range, and wherein the image was created based on a set of images of a subject, each image in the set of images having a different exposure level. 20. The system of claim 17, the processors being further operable when executing the instructions to:
receive a request to display the image on a display device, wherein the viewing context comprises technical specifications for the display device. | In one embodiment, a method includes receiving information associated with an image. Information regarding a viewing context for displaying the image may be received. The viewing context may comprise one or more factors, including, by way of example and not limitation: technical specifications for a display device, a physical environment of the display device, a state of the display device, and user preferences. Customization of the image may comprise modifying any aspect of the image, over the whole image, or just a portion of the image, such as, by way of example and not limitation: luminance, chrominance, resolution, etc. The image is customized with respect to the viewing context, and then the customized image is provided for display.1. A method comprising:
by a computing device, receiving information associated with an image; by the computing device, receiving information regarding a viewing context for displaying the image; by the computing device, customizing the image with respect to the viewing context; and by the computing device, providing the customized image for display. 2. The method of claim 1, wherein customizing the image comprises customizing metadata associated with the image, and wherein providing the customized image comprises providing the image with the customized metadata. 3. The method of claim 2, wherein the image has a high dynamic range, and wherein the image was created based on a set of images of a subject, each image in the set of images having a different exposure level. 4. The method of claim 1, wherein customizing the image comprises generating a new version of the image, and wherein providing the customized image comprises providing the new version of the image. 5. The method of claim 1, further comprising:
receiving a request to display the image on a display device, wherein the viewing context comprises technical specifications for the display device. 6. The method of claim 5, wherein the technical specifications for the display device comprise: dynamic range maximum colors, screen resolution, screen dimensions, pixel density, pixels per degree, aspect ratio, maximum viewing angle, typical viewing distance, brightness, response time, photosensor capabilities, networking capabilities, GPS capabilities, or battery life. 7. The method of claim 1, wherein the viewing context comprises information regarding a physical environment of the display device, the physical environment comprising: ambient light, location, time of day, or time of year. 8. The method of claim 1, wherein the viewing context comprises information regarding a state of the display device, the state comprising: power availability, user-configurable display settings, or network connectivity. 9. The method of claim 1, wherein the viewing context comprises information regarding user viewing preferences. 10. The method of claim 9, wherein the user viewing preferences comprise preferences of a user associated with the display device. 11. The method of claim 9, wherein the user viewing preferences comprise preferences of an expert user. 12. The method of claim 9, wherein the user viewing preferences comprise preferences of one or more social-networking connections of a user associated with the display device. 13. One or more computer-readable non-transitory storage media embodying software that is operable when executed to:
by a computing device, receive information associated with an image; by the computing device, receive information regarding a viewing context for displaying the image; by the computing device, customize the image with respect to the viewing context; and by the computing device, provide the customized image for display. 14. The media of claim 13, wherein the software operable to customize the image comprises software operable to customize metadata associated with the image, and wherein the software operable to provide the customized image comprises software operable to provide the image with the customized metadata. 15. The media of claim 14, wherein the image has a high dynamic range, and wherein the image was created based on a set of images of a subject, each image in the set of images having a different exposure level. 16. The media of claim 13, wherein the viewing context comprises information regarding user viewing preferences. 17. A system comprising:
one or more processors; and a memory coupled to the processors comprising instructions executable by the processors, the processors being operable when executing the instructions to:
receive information associated with an image;
receive information regarding a viewing context for displaying the image;
customize the image with respect to the viewing context; and
provide the customized image for display. 18. The system of claim 17, wherein the processors being operable to customize the image comprises the processors being operable to customize metadata associated with the image, and wherein the processors being operable to provide the customized image comprises the processors being operable to provide the image with the customized metadata. 19. The system of claim 18, wherein the image has a high dynamic range, and wherein the image was created based on a set of images of a subject, each image in the set of images having a different exposure level. 20. The system of claim 17, the processors being further operable when executing the instructions to:
receive a request to display the image on a display device, wherein the viewing context comprises technical specifications for the display device. | 2,600 |
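One concrete instance of "customizing the image with respect to the viewing context" (claims 1 and 7 above) is adapting luminance to ambient light within the display's headroom. A minimal sketch, assuming a linear gain rule and the dictionary keys `ambient_light` and `max_gain`, none of which come from the claims:

```python
def customize_luminance(luma, viewing_context):
    """Adapt per-pixel luminance values (0..1) to the viewing context.

    Assumed rule: brighter ambient light calls for a higher luminance
    gain, capped by the display's available headroom, then clipped to
    the representable range.
    """
    ambient = viewing_context.get("ambient_light", 0.0)   # 0 = dark room, 1 = sunlight
    max_gain = viewing_context.get("max_gain", 2.0)       # display brightness headroom
    gain = min(1.0 + ambient, max_gain)
    return [min(1.0, v * gain) for v in luma]

# Bright environment: dark pixels are boosted, bright pixels saturate.
customized = customize_luminance([0.25, 0.8], {"ambient_light": 1.0, "max_gain": 2.0})
```

The same shape of function could consume any of the other context factors the claims enumerate (display specifications, device state, user preferences) by adding keys to the context dictionary.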
9,747 | 9,747 | 15,002,425 | 2,621 | System and method for control using face detection or hand gesture detection algorithms in a captured image. Based on the existence of a detected human face or a hand gesture in an image captured by a digital camera (still or video), a control signal is generated, and provided to a device. The control may provide power or disconnect power supply to the device (or part of the device circuits). Further, the location of the detected face in the image may be used to rotate a display screen horizontally, vertically, or both, to achieve a better line of sight with a viewing person. If two or more faces are detected, the average location is calculated and used for line of sight correction. A linear feedback control loop is implemented wherein detected face deviation from the optimum is the error to be corrected by rotating the display to the required angular position. Hand gesture detection can be used as a replacement for a remote control, wherein the various hand gestures control the various functions of the controlled unit, such as a television set. | 1. (canceled) 2. A device for affecting an illumination of a display in response to face detection, the device comprising:
a single enclosure, and in said single enclosure; a display having a screen for visually presenting information; a digital camera for capturing an image in a digital data form; an image processor coupled to said digital camera for receiving the image and for detecting a human face in the image; and a second processor and firmware or software executable by said second processor, said second processor being coupled to said image processor and to said display, wherein said second processor is operative to control the illumination level of said screen in response to the detection of, or the absence of, a face in the image. 3. The device according to claim 2, wherein said display and said digital camera are mechanically fixed to said single enclosure. 4. The device according to claim 3, wherein said digital camera is positioned in a fixed position relative to said screen such that the image captured by said digital camera is of a scene facing said screen. 5. The device according to claim 2, wherein said display and said digital camera are mechanically attached to, and movable with, said single enclosure. 6. The device according to claim 5, wherein said digital camera has a center line of sight and said screen has a center line of sight that is parallel to the center line of sight of said digital camera so that the image captured is substantially of a scene facing said screen. 7. The device according to claim 2, wherein part or all of said screen is blanked in response to the detection of, or the absence of, a face in the image. 8. The device according to claim 7, wherein all of said screen is blanked in response to the detection of, or the absence of, a face in the image. 9. The device according to claim 8, wherein said screen is blanked by stopping the supply of power to said display. 10. The device according to claim 2, wherein the illumination level of said screen is controlled in response to the detection of a face in the image. 11. 
The device according to claim 10, further comprising a first timer coupled to said second processor for signaling a first time period, wherein the illumination level of said screen is controlled in response to the detection of a face in the image during the first time period. 12. The device according to claim 10, further comprising a first timer coupled to said second processor for signaling a first time period, wherein the illumination level of said screen is controlled in response to the absence of a face in the image after the face has been detected during the first time period. 13. The device according to claim 12, further comprising a second timer coupled to said second processor for signaling a second time period, wherein the illumination level of said screen is controlled in response to the detection of a face in the image during the first time period followed by the absence of a face in the image during the second time period. 14. The device according to claim 2, wherein the illumination level of said screen is controlled in response to the absence of a face in the image. 15. The device according to claim 14, further comprising a first timer coupled to said processor for signaling a first time period, wherein the illumination level of said screen is controlled in response to the absence of a face in the image during the first time period. 16. The device according to claim 2, wherein said device is further operative for detecting an element in the image in addition to the human face. 17. The device according to claim 16, wherein the illumination level of said screen is further controlled in response to the detection of, or the absence of, the element in the image. 18. The device according to claim 16, wherein the element is a body part. 19. The device according to claim 18, wherein the body part is a human hand. 20. The device according to claim 19, wherein the element is a hand gesture. 21. 
The device according to claim 20, wherein the hand gesture consists of one of: extending a single finger; extending multiple fingers; and extending all fingers of one hand. 22. The device according to claim 16, wherein said device is further operative to respond to the detection of the element in a region of the image bearing a predetermined relation to the location of the human face in the image. 23. The device according to claim 2, wherein said device is hand-held. 24. The device according to claim 2, wherein said display is a flat-panel display and is based on LCD (Liquid Crystal Display), TFT (Thin-Film Transistor), FED (Field Emission Display), CRT (Cathode Ray Tube), plasma, or LED (Light Emitting Diode) technology. 25. The device according to claim 2, wherein the device consists of, is part of, or comprises, a television set operative to receive and display television channels on said screen. 26. The device according to claim 2, wherein the device consists of, is part of, or comprises, a personal computer. 27. The device according to claim 2, wherein the device consists of, is part of, or comprises, a cellular telephone. 28. The device according to claim 2, wherein said digital camera comprises:
an optical lens for focusing received light, said lens being mechanically oriented to guide the image; a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens for capturing the image and producing an analog signal representing the image; and an analog-to-digital (A/D) converter coupled to said image sensor array for converting the analog signal to a digital data representation of the image. 29. The device according to claim 28, wherein said image sensor array is based on Charge-Coupled Devices (CCD) or Complementary Metal-Oxide-Semiconductor Devices (CMOS). 30. The device according to claim 2, wherein said device is further operative to respond to the location of the human face in the image. 31. The device according to claim 2, wherein said image processor consists of, or comprises, a non-transitory computer readable medium storing an image processing algorithm for detecting the human face in the captured image. 32. The device according to claim 2, further comprising a communication port coupled to said second processor for coupling to a network medium; and a transceiver coupled to said communication port for transmitting serial digital data to, or receiving serial digital data from, the network medium. 33. The device according to claim 32, wherein said transceiver is coupled to said digital camera for transmitting the image over the network medium. 34. The device according to claim 32, wherein said transceiver is coupled to said display for receiving the serial digital data from the network medium. 35. The device according to claim 32, wherein said communication port is an antenna for over-the-air radio-frequency communication, and wherein said transceiver is a wireless transceiver. 36. The device according to claim 35, wherein the over-the-air radio-frequency communication uses a license-free frequency band. 37. The device according to claim 36, wherein the license-free frequency band is an ISM band. 38. 
The device according to claim 37, wherein the ISM band is 5 GHz or 2.4 GHz. 39. The device according to claim 35, wherein the over-the-air radio-frequency communication takes place in a Wireless Personal Area Network (WPAN), wherein said antenna is a WPAN antenna, and wherein said wireless transceiver is a WPAN transceiver. 40. The device according to claim 39, wherein the WPAN is according to, or based on, Bluetooth, IEEE 802.15, Ultra-Wide-Band (UWB), or ZigBee. 41. The device according to claim 35, wherein the over-the-air radio-frequency communication takes place over a Wireless Local Area Network (WLAN), wherein said antenna is a WLAN antenna, and wherein said wireless transceiver is a WLAN transceiver. 42. The device according to claim 41, wherein the WLAN is according to, or based on, IEEE 802.11a, IEEE 802.11b, or IEEE 802.11g. 43. The device according to claim 35, wherein the over-the-air radio-frequency communication is a cellular communication, wherein said antenna is a cellular antenna, and wherein said wireless transceiver is a cellular transceiver. 44. The device according to claim 43, wherein the cellular communication is according to, or based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 45. The device according to claim 32, wherein the network medium is a cable, said communication port is a connector for connecting to the cable, and said transceiver is a wired transceiver. 46. The device according to claim 45, wherein the cable is a USB (Universal Serial Bus) cable, said connector is a USB connector, and said wired transceiver is a USB transceiver. 47. 
The device according to claim 45, wherein the cable is operative for use in a Local Area Network (LAN), wherein said connector is a LAN connector, and wherein said wired transceiver is a LAN transceiver. 48. The device according to claim 47, wherein the LAN is Ethernet based, and is according to, or based on, IEEE 802.3-2008 standard. 49. The device according to claim 48, wherein the cable is based on twisted-pair copper wire cables, and wherein said LAN transceiver is according to, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and wherein said LAN connector is RJ-45 type. 50. The device according to claim 45, wherein the cable is connectable to simultaneously carry a DC or AC power signal. 51. The device according to claim 50, wherein said device is further operative to output at least part of the power signal carried over the cable. 52. The device according to claim 50, wherein said device is further operative to at least in part be powered from the power signal. 53. The device according to claim 50, wherein the power signal is carried over dedicated wires in the cable, and wherein the wires are distinct from wires in the cable carrying the serial digital data. | System and method for control using face detection or hand gesture detection algorithms in a captured image. Based on the existence of a detected human face or a hand gesture in an image captured by a digital camera (still or video), a control signal is generated, and provided to a device. The control may provide power or disconnect power supply to the device (or part of the device circuits). Further, the location of the detected face in the image may be used to rotate a display screen horizontally, vertically, or both, to achieve a better line of sight with a viewing person. If two or more faces are detected, the average location is calculated and used for line of sight correction. 
A linear feedback control loop is implemented wherein detected face deviation from the optimum is the error to be corrected by rotating the display to the required angular position. Hand gesture detection can be used as a replacement for a remote control, wherein the various hand gestures control the various functions of the controlled unit, such as a television set. 1. (canceled) 2. A device for affecting an illumination of a display in response to face detection, the device comprising:
a single enclosure, and in said single enclosure; a display having a screen for visually presenting information; a digital camera for capturing an image in a digital data form; an image processor coupled to said digital camera for receiving the image and for detecting a human face in the image; and a second processor and firmware or software executable by said second processor, said second processor being coupled to said image processor and to said display, wherein said second processor is operative to control the illumination level of said screen in response to the detection of, or the absence of, a face in the image. 3. The device according to claim 2, wherein said display and said digital camera are mechanically fixed to said single enclosure. 4. The device according to claim 3, wherein said digital camera is positioned in a fixed position relative to said screen such that the image captured by said digital camera is of a scene facing said screen. 5. The device according to claim 2, wherein said display and said digital camera are mechanically attached to, and movable with, said single enclosure. 6. The device according to claim 5, wherein said digital camera has a center line of sight and said screen has a center line of sight that is parallel to the center line of sight of said digital camera so that the image captured is substantially of a scene facing said screen. 7. The device according to claim 2, wherein part or all of said screen is blanked in response to the detection of, or the absence of, a face in the image. 8. The device according to claim 7, wherein all of said screen is blanked in response to the detection of, or the absence of, a face in the image. 9. The device according to claim 8, wherein said screen is blanked by stopping the supply of power to said display. 10. The device according to claim 2, wherein the illumination level of said screen is controlled in response to the detection of a face in the image. 11. 
The device according to claim 10, further comprising a first timer coupled to said second processor for signaling a first time period, wherein the illumination level of said screen is controlled in response to the detection of a face in the image during the first time period. 12. The device according to claim 10, further comprising a first timer coupled to said second processor for signaling a first time period, wherein the illumination level of said screen is controlled in response to the absence of a face in the image after the face has been detected during the first time period. 13. The device according to claim 12, further comprising a second timer coupled to said second processor for signaling a second time period, wherein the illumination level of said screen is controlled in response to the detection of a face in the image during the first time period followed by the absence of a face in the image during the second time period. 14. The device according to claim 2, wherein the illumination level of said screen is controlled in response to the absence of a face in the image. 15. The device according to claim 14, further comprising a first timer coupled to said processor for signaling a first time period, wherein the illumination level of said screen is controlled in response to the absence of a face in the image during the first time period. 16. The device according to claim 2, wherein said device is further operative for detecting an element in the image in addition to the human face. 17. The device according to claim 16, wherein the illumination level of said screen is further controlled in response to the detection of, or the absence of, the element in the image. 18. The device according to claim 16, wherein the element is a body part. 19. The device according to claim 18, wherein the body part is a human hand. 20. The device according to claim 19, wherein the element is a hand gesture. 21. 
The device according to claim 20, wherein the hand gesture consists of one of: extending a single finger; extending multiple fingers; and extending all fingers of one hand. 22. The device according to claim 16, wherein said device is further operative to respond to the detection of the element in a region of the image bearing a predetermined relation to the location of the human face in the image. 23. The device according to claim 2, wherein said device is hand-held. 24. The device according to claim 2, wherein said display is a flat-panel display and is based on LCD (Liquid Crystal Display), TFT (Thin-Film Transistor), FED (Field Emission Display), CRT (Cathode Ray Tube), plasma, or LED (Light Emitting Diode) technology. 25. The device according to claim 2, wherein the device consists of, is part of, or comprises, a television set operative to receive and display television channels on said screen. 26. The device according to claim 2, wherein the device consists of, is part of, or comprises, a personal computer. 27. The device according to claim 2, wherein the device consists of, is part of, or comprises, a cellular telephone. 28. The device according to claim 2, wherein said digital camera comprises:
an optical lens for focusing received light, said lens being mechanically oriented to guide the image; a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens for capturing the image and producing an analog signal representing the image; and an analog-to-digital (A/D) converter coupled to said image sensor array for converting the analog signal to a digital data representation of the image. 29. The device according to claim 28, wherein said image sensor array is based on Charge-Coupled Devices (CCD) or Complementary Metal-Oxide-Semiconductor Devices (CMOS). 30. The device according to claim 2, wherein said device is further operative to respond to the location of the human face in the image. 31. The device according to claim 2, wherein said image processor consists of, or comprises, a non-transitory computer readable medium storing an image processing algorithm for detecting the human face in the captured image. 32. The device according to claim 2, further comprising a communication port coupled to said second processor for coupling to a network medium; and a transceiver coupled to said communication port for transmitting serial digital data to, or receiving serial digital data from, the network medium. 33. The device according to claim 32, wherein said transceiver is coupled to said digital camera for transmitting the image over the network medium. 34. The device according to claim 32, wherein said transceiver is coupled to said display for receiving the serial digital data from the network medium. 35. The device according to claim 32, wherein said communication port is an antenna for over-the-air radio-frequency communication, and wherein said transceiver is a wireless transceiver. 36. The device according to claim 35, wherein the over-the-air radio-frequency communication uses a license-free frequency band. 37. The device according to claim 36, wherein the license-free frequency band is an ISM band. 38. 
The device according to claim 37, wherein the ISM band is 5 GHz or 2.4 GHz. 39. The device according to claim 35, wherein the over-the-air radio-frequency communication takes place in a Wireless Personal Area Network (WPAN), wherein said antenna is a WPAN antenna, and wherein said wireless transceiver is a WPAN transceiver. 40. The device according to claim 39, wherein the WPAN is according to, or based on, Bluetooth, IEEE 802.15, Ultra-Wide-Band (UWB), or ZigBee. 41. The device according to claim 35, wherein the over-the-air radio-frequency communication takes place over a Wireless Local Area Network (WLAN), wherein said antenna is a WLAN antenna, and wherein said wireless transceiver is a WLAN transceiver. 42. The device according to claim 41, wherein the WLAN is according to, or based on, IEEE 802.11a, IEEE 802.11b, or IEEE 802.11g. 43. The device according to claim 35, wherein the over-the-air radio-frequency communication is a cellular communication, wherein said antenna is a cellular antenna, and wherein said wireless transceiver is a cellular transceiver. 44. The device according to claim 43, wherein the cellular communication is according to, or based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 45. The device according to claim 32, wherein the network medium is a cable, said communication port is a connector for connecting to the cable, and said transceiver is a wired transceiver. 46. The device according to claim 45, wherein the cable is a USB (Universal Serial Bus) cable, said connector is a USB connector, and said wired transceiver is a USB transceiver. 47. 
The device according to claim 45, wherein the cable is operative for use in a Local Area Network (LAN), wherein said connector is a LAN connector, and wherein said wired transceiver is a LAN transceiver. 48. The device according to claim 47, wherein the LAN is Ethernet based, and is according to, or based on, IEEE 802.3-2008 standard. 49. The device according to claim 48, wherein the cable is based on twisted-pair copper wire cables, and wherein said LAN transceiver is according to, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and wherein said LAN connector is RJ-45 type. 50. The device according to claim 45, wherein the cable is connectable to simultaneously carry a DC or AC power signal. 51. The device according to claim 50, wherein said device is further operative to output at least part of the power signal carried over the cable. 52. The device according to claim 50, wherein said device is further operative to at least in part be powered from the power signal. 53. The device according to claim 50, wherein the power signal is carried over dedicated wires in the cable, and wherein the wires are distinct from wires in the cable carrying the serial digital data. | 2,600 |
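The timer-gated illumination control described in claims 11–15 of this row (dim or blank the screen when no face has been seen for a first time period, restore it on detection) amounts to a small state machine. A minimal sketch, assuming a single timer and a boolean face-detector output; the class and parameter names are illustrative, not from the patent:

```python
class ScreenController:
    """Blank the screen after a face-free interval; restore on detection."""

    def __init__(self, blank_after_s: float = 30.0):
        self.blank_after_s = blank_after_s   # first time period (claim 15)
        self._no_face_since = None           # timer start, or None if a face is present
        self.screen_on = True

    def update(self, face_detected: bool, now: float) -> bool:
        """Feed one detector result with its timestamp; return screen state."""
        if face_detected:
            self._no_face_since = None
            self.screen_on = True
        else:
            if self._no_face_since is None:
                self._no_face_since = now
            elif now - self._no_face_since >= self.blank_after_s:
                self.screen_on = False
        return self.screen_on
```

The second-timer variant of claim 13 would add one more interval (face present during the first period, then absent during the second) before acting, but the polling structure stays the same.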
9,748 | 9,748 | 14,837,286 | 2,683 | A seismic node, in a seismic data acquisition system, includes a housing having an input receiving analog seismic data and an output supplying corrected series of digital sampled and dated seismic data; a single local clock located inside the housing and configured to generate an inaccurate local clock signal (CLK); a gauging circuit configured to measure a frequency drift and a phase error of the local clock signal (CLK) based on received synchronization information; an analog-to-digital converter located inside the housing and configured to process the analog seismic signal based on the local clock signal (CLK) and to provide a series of digital sampled and dated seismic data; and a correcting circuit for generating the corrected series of digital sampled and dated seismic data based at least on the measured frequency drift and phase error of the local clock signal (CLK). | 1. A seismic data acquisition apparatus to be used in a seismic data acquisition system for collecting seismic data, the data apparatus comprising:
a housing housing only one local clock circuit, the local clock circuit generating a local clock signal (CLK); a gauging circuit located inside the housing and having a first input for receiving synchronization information from outside the housing and a second input for receiving the local clock signal (CLK), wherein the gauging circuit is configured to measure a frequency drift and a phase error of the local clock signal (CLK) based on the received synchronization information; an analog-to-digital converter located inside the housing and configured to receive the local clock signal (CLK) and an analog seismic signal, the analog-to-digital converter being configured to sample the analog seismic signal based on the local clock signal (CLK) and to provide a series of digital sampled and dated seismic data based on the local clock signal (CLK); and a correcting circuit connected to (i) the gauging circuit for receiving the frequency drift and the phase error of the local clock signal (CLK) and (ii) to the analog-to-digital converter for receiving the series of digital sampled and dated seismic data, wherein the correcting circuit generates corrected series of digital sampled and dated seismic data based at least on the measured frequency drift and phase error of the local clock signal (CLK). 2. The apparatus of claim 1, further comprising:
a receiving antenna connected to the gauging circuit, the receiving antenna receiving in a wireless manner the synchronization information. 3. The apparatus of claim 2, wherein the synchronization information is representative of a remote reference clock signal. 4. The apparatus of claim 3, wherein the remote reference clock signal is included in a radio communication signal. 5. The apparatus of claim 2, further comprising:
a receiving circuit that includes the gauging circuit. 6. The apparatus of claim 5, wherein the receiving circuit is a radio-frequency (RF) receiver. 7. The apparatus of claim 5, wherein the receiving circuit is a global navigation satellite system (GNSS) receiver. 8. The apparatus of claim 5, wherein the local clock circuit is external to the receiving circuit. 9. The apparatus of claim 5, further comprising:
a switching circuit for turning off, at least partially, the receiving circuit once the gauging circuit has measured the frequency drift and the phase error. 10. The apparatus of claim 5, wherein the local clock circuit is internal to the receiving circuit, and the data acquisition apparatus comprises a switching circuit for turning off, at least partially, the receiving circuit without turning off the local clock circuit once the gauging circuit has measured the frequency drift and the phase error. 11. The apparatus of claim 1, wherein the analog-to-digital converter converts the analog seismic signal into digital data. 12. The apparatus of claim 1, further comprising:
seismic sensors located outside the housing, wherein the seismic sensors record the analog seismic signal. 13. The apparatus of claim 1, further comprising:
a calculation circuit connected to the gauging circuit and configured to generate a reference signal (CLKREF). 14. A seismic node in a seismic data acquisition system, the seismic node comprising:
a housing having an input and an output; the input receiving analog seismic data from one or more seismic sensors; the output supplying corrected series of digital sampled and dated seismic data; a single local clock located inside the housing and configured to generate an inaccurate local clock signal (CLK); a gauging circuit configured to measure a frequency drift and a phase error of the local clock signal (CLK) based on received synchronization information; an analog-to-digital converter located inside the housing and configured to process the analog seismic signal based on the local clock signal (CLK) and to provide a series of digital sampled and dated seismic data; and a correcting circuit for generating the corrected series of digital sampled and dated seismic data based at least on the measured frequency drift and phase error of the local clock signal (CLK). 15. The node of claim 14, further comprising:
a receiving antenna connected to the gauging circuit, the receiving antenna receiving in a wireless manner the synchronization information from a remote reference clock signal. 16. The node of claim 14, further comprising:
a switching circuit for turning off the gauging circuit. 17. A method for transforming an analog seismic signal into digital seismic data in a seismic node, the method comprising:
receiving a radio communication signal including synchronization information representative of a remote reference clock signal; locally generating in a seismic node only a single clock signal (CLK); receiving analog seismic signals from at least one seismic sensor; digitizing the analog seismic signals to generate a series of digital sampled and dated seismic data based on the local clock signal (CLK); measuring a frequency drift and a phase error of the local clock signal (CLK) based on the synchronization information; and correcting the series of digital sampled and dated seismic data based at least on the measured frequency drift and phase error, to generate corrected series of digital sampled and dated seismic data. 18. The method of claim 17, wherein the step of measuring is suspended from time to time while the step of correcting is performed continuously. 19. The method of claim 17, wherein the step of receiving a radio communication signal is suspended from time to time while the step of correcting is performed continuously. 20. The method of claim 17, wherein the step of locally generating only a single clock signal (CLK) is simultaneously suspended with the step of measuring or the step of receiving a radio communication signal. 
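Method claims 17-20 above walk through digitizing with a single inaccurate local clock (CLK) and then correcting the dated samples using the measured frequency drift and phase error. Below is a minimal sketch of that correction step under an assumed linear clock-error model; the claims do not specify the arithmetic, and all function and parameter names here are illustrative:

```python
import numpy as np

def correct_timestamps(n_samples, nominal_rate_hz, freq_drift_ppm, phase_error_s):
    """Re-date samples stamped by an inaccurate local clock (CLK).

    Assumed linear error model (not specified by the claims): CLK runs fast
    or slow by `freq_drift_ppm` parts per million and is offset from the
    reference clock by `phase_error_s` seconds.
    """
    nominal_t = np.arange(n_samples) / nominal_rate_hz  # dates as stamped by CLK
    rate_scale = 1.0 + freq_drift_ppm * 1e-6            # true/nominal clock ratio
    return nominal_t * rate_scale - phase_error_s       # corrected dates

def correct_series(samples, nominal_t, corrected_t):
    """Resample the digitized series back onto the nominal time grid so the
    output is a corrected series of uniformly dated seismic data."""
    return np.interp(nominal_t, corrected_t, np.asarray(samples, dtype=float))
```

Note how this fits claims 18-19: once the drift and phase error have been measured, the correction step needs no further synchronization input, so the measuring and receiving steps can be suspended while correction runs continuously.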
9,749 | 9,749 | 15,234,375 | 2,675 | A low power sound recognition sensor is configured to receive an analog signal that may contain a signature sound. Sparse sound parameter information is extracted from the analog signal. The extracted sparse sound parameter information is processed using a speaker dependent sound signature database stored in the sound recognition sensor to identify sounds or speech contained in the analog signal. The sound signature database may include several user enrollments for a sound command each representing an entire word or multiword phrase. The extracted sparse sound parameter information may be compared to the multiple user enrolled signatures using cosine distance, Euclidean distance, correlation distance, etc., for example. | 1. A method for training a speaker dependent sound recognition sensor, the method comprising:
receiving an analog signal that contains a command sound spoken by a user of the sound recognition sensor; extracting sparse sound parameter information from the analog signal using an analog portion of the sound recognition sensor to form a user dependent sound vector representing an entire word or multiword phrase; and storing the speaker dependent sound vector in a sound signature database coupled to the sound recognition sensor, whereby the user dependent sound vector is provided to the sound recognition sensor. 2. The method of claim 1, wherein the user repeats the command sound at least three times, such that at least three user dependent sound vectors are formed by extracting sparse sound parameter information from the analog signal; and
storing the at least three speaker dependent sound vectors in the sound signature database, such that each of the at least three user dependent sound vectors are provided to the sound recognition sensor. 3. The method of claim 1, wherein the sound signature database comprises a plurality of sound signatures each representing a whole spoken word or multiword phrase. 4. The method of claim 1, wherein extracting the sparse sound parameters is performed at a sample rate of less than or equal to approximately 500 samples per second. 5. The method of claim 4, wherein the sample rate is approximately 50 samples per second for voice sounds. 6. The method of claim 1, wherein the sparse sound parameter information comprises zero crossing rates, low pass energy values or band pass energy values. 7. A method for operating a speaker dependent sound recognition sensor, the method comprising:
receiving an analog signal that may contain a trigger sound; extracting sparse sound parameter information from the analog signal using an analog portion of the sound recognition sensor; and processing the extracted sparse sound parameter information using a speaker dependent sound signature database stored in the sound recognition sensor to identify sounds or speech contained in the analog signal, wherein the sound signature database comprises a plurality of user dependent sound vectors from a single user each representing a same entire word or multiword phrase. 8. The method of claim 7, wherein processing the extracted sparse sound parameter information comprises determining a cosine distance between each of the plurality of sound vectors and a portion of the sparse sound parameter information; and
declaring a match when the cosine distance of one or more of the plurality of sound vectors is less than a threshold value. 9. The method of claim 7, wherein the sound signature database comprises a plurality of sound signatures each representing a whole spoken word or multiword phrase. 10. The method of claim 7, wherein extracting the sparse sound parameters is performed at a sample rate of less than or equal to approximately 500 samples per second. 11. The method of claim 10, wherein the sample rate is approximately 50 samples per second for voice sounds. 12. The method of claim 7, wherein the sparse sound parameter information comprises zero crossing rates, low pass energy values or band pass energy values. 13. An apparatus for recognizing a sound, the apparatus comprising:
a microphone; an analog front end section comprising analog feature extraction circuitry configured to receive an analog signal from the microphone that may contain a signature sound and to extract sparse sound parameter information from the analog signal; and a digital classification section coupled to the analog front end section being configured to compare the sound parameter information to a sound signature database stored in memory coupled to the digital classification section to detect when the signature sound is received in the analog signal and to generate a match signal when a signature sound is detected, wherein the sound signature database comprises a plurality of speaker dependent sound vectors from a single speaker each representing a same entire word or multiword phrase. 14. The apparatus of claim 13, wherein the digital classification section is configured to determine a cosine distance between each of the plurality of sound vectors and a portion of the sparse sound parameter information; and
to declare a match when the cosine distance of one or more of the plurality of sound vectors is less than a threshold value. 15. The apparatus of claim 13, wherein the analog front end is configured to extract the sparse sound parameters at a sample rate of less than or equal to approximately 500 samples per second. 16. The apparatus of claim 13, wherein the analog feature extraction circuitry comprises:
a counter operable to measure a number of times the analog signal crosses a threshold value during each of a sequence of time frames to form a sequence of zero crossing (ZC) counts; and a subtractor operable to take a difference between selected pairs of ZC counts to form a sequence of differential ZC counts. 17. The apparatus of claim 13, wherein the analog feature extraction circuitry comprises one or more low pass or band pass energy filters. 18. The apparatus of claim 15, wherein the analog feature extraction circuitry comprises one or more low pass or band pass energy filters.
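The claims above recite two concrete computations: differential zero-crossing counts as the sparse feature (claim 16's counter and subtractor) and a cosine-distance comparison against multiple enrolled vectors with a threshold (claims 8 and 14). The following is an illustrative sketch, not the patented implementation; the frame length, crossing threshold, and distance cutoff are assumed values:

```python
import numpy as np

def differential_zc_counts(signal, frame_len, threshold=0.0):
    """Claim 16's counter/subtractor pair: count threshold crossings of the
    signal in each time frame, then difference adjacent frame counts."""
    n_frames = len(signal) // frame_len
    frames = np.reshape(np.asarray(signal)[:n_frames * frame_len], (n_frames, frame_len))
    above = frames > threshold                          # per-sample comparator output
    zc = np.sum(above[:, 1:] != above[:, :-1], axis=1)  # crossings per frame
    return np.diff(zc.astype(float))                    # differential ZC counts

def cosine_distance(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def matches_enrollment(feature_vec, enrolled_vecs, cutoff=0.2):
    """Claims 8 and 14: declare a match when the cosine distance to any of
    the enrolled vectors for the same word or phrase is below the cutoff."""
    return any(cosine_distance(feature_vec, e) < cutoff for e in enrolled_vecs)
```

In the claimed sensor the crossing counts would come from analog front-end circuitry operating at roughly 50-500 samples per second; here they are computed digitally purely for illustration.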
9,750 | 9,750 | 15,049,570 | 2,683 | An accessory system for a vehicle includes a mounting element attached at an inner surface of a windshield of the vehicle, with the mounting element adapted for mounting of an accessory module thereto and demounting of the accessory module therefrom. The accessory module includes a camera, and electrical circuitry associated with the camera is disposed within the accessory module. The accessory module includes a portion that faces towards the windshield when the accessory module is mounted to the mounting element attached at the windshield. The portion includes wall structure, and the lens of the camera protrudes through the wall structure to exterior of the accessory module. The accessory module and mounting element are configured so that, with the accessory module mounted to the mounting element, the lens is facing towards the windshield and the camera has a field of view appropriate for a driver assistance system of the vehicle. | 1. An accessory system for a vehicle comprising a windshield having an outer surface that is exterior of the vehicle and an inner surface that is interior of the vehicle, with the windshield at a windshield angle relative to vertical when mounted in the vehicle, said accessory system comprising:
a mounting element attached at an inner surface of a windshield of a vehicle equipped with said accessory system; wherein said mounting element is adapted for mounting of an accessory module thereto and demounting of said accessory module therefrom; said accessory module adapted for mounting to and demounting from said mounting element; said accessory module comprising a camera, said camera comprising an imaging sensor and a lens; wherein electrical circuitry associated with said camera is disposed within said accessory module; wherein said accessory module comprises a portion that faces towards the windshield when said accessory module is mounted to said mounting element attached at the windshield; said portion comprising wall structure; wherein said lens of said camera protrudes through said wall structure to exterior of said accessory module; wherein said accessory module and said mounting element are configured so that, with said accessory module mounted to said mounting element, said lens is facing towards the windshield of the equipped vehicle and said camera has a field of view appropriate for a driver assistance system of the equipped vehicle and views forwardly through an area of the windshield that is wiped by a windshield wiper of the equipped vehicle when the windshield wiper is activated; wherein a multi-pin electrical socket is located at said accessory module and connects with said circuitry disposed within said accessory module; and wherein said electrical socket is configured for electrical connection to at least one of (i) an electrical source of the equipped vehicle, (ii) a power source of the equipped vehicle and (iii) a control of the equipped vehicle. 2. The accessory system of claim 1, wherein said driver assistance system comprises an adaptive cruise control system of the equipped vehicle. 3. 
The accessory system of claim 2, wherein said electrical socket connects said circuitry with at least one of (i) a communication bus of the equipped vehicle and (ii) a CAN communication bus of the equipped vehicle. 4. The accessory system of claim 1, wherein said mounting element is held by an adhesive at the windshield at a first location and wherein a mirror mounting button is attached at the inner surface of the windshield at a second location that is separate from, local to and spaced from said first location, and wherein an interior rearview mirror assembly is mounted to said mirror mounting button, and wherein said accessory module is mountable to and demountable from said mounting element independent of mounting said interior rearview mirror assembly to said mirror mounting button or demounting said interior rearview mirror assembly from said mirror mounting button, and wherein said mounting element is at an area of the windshield that is wiped by a windshield wiper of the equipped vehicle when the windshield wiper is activated. 5. The accessory system of claim 1, wherein said accessory module comprises a casing that at least partially encases said camera. 6. The accessory system of claim 5, wherein said imaging sensor is mounted on a printed circuit board that is disposed within said casing of said accessory module. 7. The accessory system of claim 6, wherein said imaging sensor comprises a CMOS photosensor array. 8. The accessory system of claim 7, wherein said lens of said camera protrudes through said wall structure to exterior of said accessory module into a recess region of said casing. 9. The accessory system of claim 1, wherein said driver assistance system comprises a lane change assist system of the equipped vehicle. 10. 
The accessory system of claim 1, wherein said mounting element is bonded via an adhesive to the inner surface of the windshield at a first location and wherein a mirror mounting button is attached at the inner surface of the windshield at a second location that is separate from, local to and spaced from said first location, and wherein an interior rearview mirror assembly is mounted to said mirror mounting button, and wherein said accessory module is mountable to and demountable from said mounting element independent of mounting said interior rearview mirror assembly to said mirror mounting button or demounting said interior rearview mirror assembly from said mirror mounting button. 11. The accessory system of claim 10, wherein said interior rearview mirror assembly includes an electro-optic reflective element, at least two photosensors, a microprocessor and associated circuitry all commonly established on a common semiconductor element, and wherein said microprocessor is operable to detect a voltage applied to said electro-optic reflective element and to adjust the voltage applied to said electro-optic reflective element via controlling a current gating element, said current gating element controlling an amount of current shunted to ground to control the voltage applied to said electro-optic reflective element, and wherein said interior rearview mirror assembly includes an inductive power receiver that receives power from an inductive power transmitter located remotely from said interior rearview mirror assembly, and wherein said interior rearview mirror assembly includes a data transceiver that wirelessly communicates with a data transceiver located remotely from said interior rearview mirror assembly. 12. 
The accessory system of claim 1, wherein said electrical socket connects said circuitry with a communication bus of the equipped vehicle, and wherein said mounting element and said accessory module are adapted for mounting by snap attachment of said accessory module to said mounting element. 13. An accessory system for a vehicle comprising a windshield having an outer surface that is exterior of the vehicle and an inner surface that is interior of the vehicle, with the windshield at a windshield angle relative to vertical when mounted in the vehicle, said accessory system comprising:
a mounting element attached at an inner surface of a windshield of a vehicle equipped with said accessory system; wherein said mounting element is adapted for mounting of an accessory module thereto and demounting of said accessory module therefrom; said accessory module adapted for mounting to and demounting from said mounting element; said accessory module comprising a camera, said camera comprising an imaging sensor and a lens; wherein electrical circuitry associated with said camera is disposed within said accessory module; wherein said accessory module comprises a portion that faces towards the windshield when said accessory module is mounted to said mounting element attached at the windshield; said portion comprising wall structure; wherein said lens of said camera protrudes through said wall structure to exterior of said accessory module; wherein said accessory module and said mounting element are configured so that, with said accessory module mounted to said mounting element, said lens is facing towards the windshield of the equipped vehicle and said camera has a field of view appropriate for a driver assistance system of the equipped vehicle and views forwardly through an area of the windshield that is wiped by a windshield wiper of the equipped vehicle when the windshield wiper is activated; wherein said accessory module comprises a casing that at least partially encases said camera; wherein said imaging sensor is mounted on a printed circuit board that is disposed within said casing of said accessory module; wherein said lens of said camera protrudes through said wall structure to exterior of said accessory module into a recess region of said casing; and wherein said mounting element is attached at the inner surface of the windshield at a first location and wherein a mirror mounting button is attached at the inner surface of the windshield at a second location that is separate from, local to and spaced from said first location, and wherein an interior 
rearview mirror assembly is mounted to said mirror mounting button, and wherein said accessory module is mountable to and demountable from said mounting element independent of mounting said interior rearview mirror assembly to said mirror mounting button or demounting said interior rearview mirror assembly from said mirror mounting button. 14. The accessory system of claim 13, wherein a multi-pin electrical socket is located at said accessory module and connects with said circuitry disposed within said accessory module, wherein said electrical socket is configured for electrical connection to at least one of (i) an electrical source of the equipped vehicle, (ii) a power source of the equipped vehicle, (iii) a control of the equipped vehicle and (iv) a communication bus of the equipped vehicle. 15. The accessory system of claim 14, wherein said imaging sensor comprises a CMOS photosensor array and wherein said mounting element and said accessory module are adapted for mounting by snap attachment of said accessory module to said mounting element. 16. The accessory system of claim 15, wherein said mounting element is bonded by an adhesive to the windshield. 17. An accessory system for a vehicle comprising a windshield having an outer surface that is exterior of the vehicle and an inner surface that is interior of the vehicle, with the windshield at a windshield angle relative to vertical when mounted in the vehicle, said accessory system comprising:
a mounting element attached at an inner surface of a windshield of a vehicle equipped with said accessory system; wherein said mounting element is adapted for mounting of an accessory module thereto and demounting of said accessory module therefrom; said accessory module adapted for mounting to and demounting from said mounting element; said accessory module comprising a camera, said camera comprising an imaging sensor and a lens; wherein electrical circuitry associated with said camera is disposed within said accessory module; wherein said accessory module comprises a portion that faces towards the windshield when said accessory module is mounted to said mounting element attached at the windshield; said portion comprising wall structure; wherein said lens of said camera protrudes through said wall structure to exterior of said accessory module; wherein said accessory module and said mounting element are configured so that, with said accessory module mounted to said mounting element, said lens is facing towards the windshield of the equipped vehicle and said camera has a field of view appropriate for a driver assistance system of the equipped vehicle and views forwardly through an area of the windshield that is wiped by a windshield wiper of the equipped vehicle when the windshield wiper is activated; wherein said accessory module comprises a casing that at least partially encases said camera; wherein said imaging sensor is mounted on a printed circuit board that is disposed within said casing of said accessory module; wherein said mounting element is attached at the inner surface of the windshield at a first location and wherein a mirror mounting button is attached at the inner surface of the windshield at a second location that is separate from, local to and spaced from said first location, and wherein an interior rearview mirror assembly is mounted to said mirror mounting button, and wherein said accessory module is mountable to and demountable from said 
mounting element independent of mounting said interior rearview mirror assembly to said mirror mounting button or demounting said interior rearview mirror assembly from said mirror mounting button; wherein an electrical socket is located at said accessory module and connects with said circuitry disposed within said accessory module; and wherein said mounting element and said accessory module are adapted for mounting by snap attachment of said accessory module to said mounting element. 18. The accessory system of claim 17, wherein said imaging sensor comprises a CMOS photosensor array and wherein said lens of said camera protrudes through said wall structure to exterior of said accessory module into a recess region of said casing. 19. The accessory system of claim 18, wherein said electrical socket is configured for electrical connection to at least one of (i) an electrical source of the equipped vehicle, (ii) a power source of the equipped vehicle, (iii) a control of the equipped vehicle and (iv) a communication bus of the equipped vehicle. 20. The accessory system of claim 19, wherein said driver assistance system comprises an automatic headlamp control system of the equipped vehicle. | An accessory system for a vehicle includes a mounting element attached at an inner surface of a windshield of the vehicle, with the mounting element adapted for mounting of an accessory module thereto and demounting of the accessory module therefrom. The accessory module includes a camera, and electrical circuitry associated with the camera is disposed within the accessory module. The accessory module includes a portion that faces towards the windshield when the accessory module is mounted to the mounting element attached at the windshield. The portion includes wall structure, and the lens of the camera protrudes through the wall structure to exterior of the accessory module. 
The accessory module and mounting element are configured so that, with the accessory module mounted to the mounting element, the lens is facing towards the windshield and the camera has a field of view appropriate for a driver assistance system of the vehicle.
1. An accessory system for a vehicle comprising a windshield having an outer surface that is exterior of the vehicle and an inner surface that is interior of the vehicle, with the windshield at a windshield angle relative to vertical when mounted in the vehicle, said accessory system comprising:
a mounting element attached at an inner surface of a windshield of a vehicle equipped with said accessory system; wherein said mounting element is adapted for mounting of an accessory module thereto and demounting of said accessory module therefrom; said accessory module adapted for mounting to and demounting from said mounting element; said accessory module comprising a camera, said camera comprising an imaging sensor and a lens; wherein electrical circuitry associated with said camera is disposed within said accessory module; wherein said accessory module comprises a portion that faces towards the windshield when said accessory module is mounted to said mounting element attached at the windshield; said portion comprising wall structure; wherein said lens of said camera protrudes through said wall structure to exterior of said accessory module; wherein said accessory module and said mounting element are configured so that, with said accessory module mounted to said mounting element, said lens is facing towards the windshield of the equipped vehicle and said camera has a field of view appropriate for a driver assistance system of the equipped vehicle and views forwardly through an area of the windshield that is wiped by a windshield wiper of the equipped vehicle when the windshield wiper is activated; wherein a multi-pin electrical socket is located at said accessory module and connects with said circuitry disposed within said accessory module; and wherein said electrical socket is configured for electrical connection to at least one of (i) an electrical source of the equipped vehicle, (ii) a power source of the equipped vehicle and (iii) a control of the equipped vehicle. 2. The accessory system of claim 1, wherein said driver assistance system comprises an adaptive cruise control system of the equipped vehicle. 3. 
The accessory system of claim 2, wherein said electrical socket connects said circuitry with at least one of (i) a communication bus of the equipped vehicle and (ii) a CAN communication bus of the equipped vehicle. 4. The accessory system of claim 1, wherein said mounting element is held by an adhesive at the windshield at a first location and wherein a mirror mounting button is attached at the inner surface of the windshield at a second location that is separate from, local to and spaced from said first location, and wherein an interior rearview mirror assembly is mounted to said mirror mounting button, and wherein said accessory module is mountable to and demountable from said mounting element independent of mounting said interior rearview mirror assembly to said mirror mounting button or demounting said interior rearview mirror assembly from said mirror mounting button, and wherein said mounting element is at an area of the windshield that is wiped by a windshield wiper of the equipped vehicle when the windshield wiper is activated. 5. The accessory system of claim 1, wherein said accessory module comprises a casing that at least partially encases said camera. 6. The accessory system of claim 5, wherein said imaging sensor is mounted on a printed circuit board that is disposed within said casing of said accessory module. 7. The accessory system of claim 6, wherein said imaging sensor comprises a CMOS photosensor array. 8. The accessory system of claim 7, wherein said lens of said camera protrudes through said wall structure to exterior of said accessory module into a recess region of said casing. 9. The accessory system of claim 1, wherein said driver assistance system comprises a lane change assist system of the equipped vehicle. 10. 
The accessory system of claim 1, wherein said mounting element is bonded via an adhesive to the inner surface of the windshield at a first location and wherein a mirror mounting button is attached at the inner surface of the windshield at a second location that is separate from, local to and spaced from said first location, and wherein an interior rearview mirror assembly is mounted to said mirror mounting button, and wherein said accessory module is mountable to and demountable from said mounting element independent of mounting said interior rearview mirror assembly to said mirror mounting button or demounting said interior rearview mirror assembly from said mirror mounting button. 11. The accessory system of claim 10, wherein said interior rearview mirror assembly includes an electro-optic reflective element, at least two photosensors, a microprocessor and associated circuitry all commonly established on a common semiconductor element, and wherein said microprocessor is operable to detect a voltage applied to said electro-optic reflective element and to adjust the voltage applied to said electro-optic reflective element via controlling a current gating element, said current gating element controlling an amount of current shunted to ground to control the voltage applied to said electro-optic reflective element, and wherein said interior rearview mirror assembly includes an inductive power receiver that receives power from an inductive power transmitter located remotely from said interior rearview mirror assembly, and wherein said interior rearview mirror assembly includes a data transceiver that wirelessly communicates with a data transceiver located remotely from said interior rearview mirror assembly. 12. 
The accessory system of claim 1, wherein said electrical socket connects said circuitry with a communication bus of the equipped vehicle, and wherein said mounting element and said accessory module are adapted for mounting by snap attachment of said accessory module to said mounting element. 13. An accessory system for a vehicle comprising a windshield having an outer surface that is exterior of the vehicle and an inner surface that is interior of the vehicle, with the windshield at a windshield angle relative to vertical when mounted in the vehicle, said accessory system comprising:
a mounting element attached at an inner surface of a windshield of a vehicle equipped with said accessory system; wherein said mounting element is adapted for mounting of an accessory module thereto and demounting of said accessory module therefrom; said accessory module adapted for mounting to and demounting from said mounting element; said accessory module comprising a camera, said camera comprising an imaging sensor and a lens; wherein electrical circuitry associated with said camera is disposed within said accessory module; wherein said accessory module comprises a portion that faces towards the windshield when said accessory module is mounted to said mounting element attached at the windshield; said portion comprising wall structure; wherein said lens of said camera protrudes through said wall structure to exterior of said accessory module; wherein said accessory module and said mounting element are configured so that, with said accessory module mounted to said mounting element, said lens is facing towards the windshield of the equipped vehicle and said camera has a field of view appropriate for a driver assistance system of the equipped vehicle and views forwardly through an area of the windshield that is wiped by a windshield wiper of the equipped vehicle when the windshield wiper is activated; wherein said accessory module comprises a casing that at least partially encases said camera; wherein said imaging sensor is mounted on a printed circuit board that is disposed within said casing of said accessory module; wherein said lens of said camera protrudes through said wall structure to exterior of said accessory module into a recess region of said casing; and wherein said mounting element is attached at the inner surface of the windshield at a first location and wherein a mirror mounting button is attached at the inner surface of the windshield at a second location that is separate from, local to and spaced from said first location, and wherein an interior 
rearview mirror assembly is mounted to said mirror mounting button, and wherein said accessory module is mountable to and demountable from said mounting element independent of mounting said interior rearview mirror assembly to said mirror mounting button or demounting said interior rearview mirror assembly from said mirror mounting button. 14. The accessory system of claim 13, wherein a multi-pin electrical socket is located at said accessory module and connects with said circuitry disposed within said accessory module, wherein said electrical socket is configured for electrical connection to at least one of (i) an electrical source of the equipped vehicle, (ii) a power source of the equipped vehicle, (iii) a control of the equipped vehicle and (iv) a communication bus of the equipped vehicle. 15. The accessory system of claim 14, wherein said imaging sensor comprises a CMOS photosensor array and wherein said mounting element and said accessory module are adapted for mounting by snap attachment of said accessory module to said mounting element. 16. The accessory system of claim 15, wherein said mounting element is bonded by an adhesive to the windshield. 17. An accessory system for a vehicle comprising a windshield having an outer surface that is exterior of the vehicle and an inner surface that is interior of the vehicle, with the windshield at a windshield angle relative to vertical when mounted in the vehicle, said accessory system comprising:
a mounting element attached at an inner surface of a windshield of a vehicle equipped with said accessory system; wherein said mounting element is adapted for mounting of an accessory module thereto and demounting of said accessory module therefrom; said accessory module adapted for mounting to and demounting from said mounting element; said accessory module comprising a camera, said camera comprising an imaging sensor and a lens; wherein electrical circuitry associated with said camera is disposed within said accessory module; wherein said accessory module comprises a portion that faces towards the windshield when said accessory module is mounted to said mounting element attached at the windshield; said portion comprising wall structure; wherein said lens of said camera protrudes through said wall structure to exterior of said accessory module; wherein said accessory module and said mounting element are configured so that, with said accessory module mounted to said mounting element, said lens is facing towards the windshield of the equipped vehicle and said camera has a field of view appropriate for a driver assistance system of the equipped vehicle and views forwardly through an area of the windshield that is wiped by a windshield wiper of the equipped vehicle when the windshield wiper is activated; wherein said accessory module comprises a casing that at least partially encases said camera; wherein said imaging sensor is mounted on a printed circuit board that is disposed within said casing of said accessory module; wherein said mounting element is attached at the inner surface of the windshield at a first location and wherein a mirror mounting button is attached at the inner surface of the windshield at a second location that is separate from, local to and spaced from said first location, and wherein an interior rearview mirror assembly is mounted to said mirror mounting button, and wherein said accessory module is mountable to and demountable from said 
mounting element independent of mounting said interior rearview mirror assembly to said mirror mounting button or demounting said interior rearview mirror assembly from said mirror mounting button; wherein an electrical socket is located at said accessory module and connects with said circuitry disposed within said accessory module; and wherein said mounting element and said accessory module are adapted for mounting by snap attachment of said accessory module to said mounting element. 18. The accessory system of claim 17, wherein said imaging sensor comprises a CMOS photosensor array and wherein said lens of said camera protrudes through said wall structure to exterior of said accessory module into a recess region of said casing. 19. The accessory system of claim 18, wherein said electrical socket is configured for electrical connection to at least one of (i) an electrical source of the equipped vehicle, (ii) a power source of the equipped vehicle, (iii) a control of the equipped vehicle and (iv) a communication bus of the equipped vehicle. 20. The accessory system of claim 19, wherein said driver assistance system comprises an automatic headlamp control system of the equipped vehicle. | 2,600 |
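Claim 11 above describes a mirror-dimming scheme in which a microprocessor detects the voltage applied to the electro-optic reflective element and adjusts it by controlling a current gating element that shunts current to ground. As a rough illustration only, that behavior can be sketched as a proportional feedback loop. The class name, gain value, and toy plant model below are assumptions made for illustration; they are not taken from the patent.

```python
# Hypothetical sketch of the claim-11 dimming control loop: a controller
# reads the voltage across an electro-optic (EC) element and regulates it
# by varying how much current a gating element shunts to ground.
# All names and constants here are illustrative assumptions.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

class ElectroOpticDimmer:
    """Proportional controller for the voltage across an EC element."""

    def __init__(self, gain=0.5, max_shunt=1.0):
        self.gain = gain            # proportional gain (assumed value)
        self.max_shunt = max_shunt  # normalized maximum shunt-current command
        self.shunt = 0.0            # current shunt command, in [0, max_shunt]

    def step(self, measured_v, target_v):
        # Shunting more current to ground lowers the element voltage, so
        # when the measured voltage is above target, raise the shunt command.
        error = measured_v - target_v
        self.shunt = clamp(self.shunt + self.gain * error, 0.0, self.max_shunt)
        return self.shunt
```

Driven against a simple plant where each increment of shunted current lowers the element voltage, repeated calls to `step` settle the measured voltage onto the target, which is the closed-loop behavior the claim language implies.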
9,751 | 9,751 | 14,920,456 | 2,646 | A memory of a personal device may store preferences of a user. A processor of the personal device may be programmed to scan, using a wireless transceiver, for in-vehicle components located within a seating zone of a vehicle in which the personal device is located, identify features of the in-vehicle components, and provide feedback to the user of a notification, using at least one of the in-vehicle components, based on the preferences and features of the in-vehicle components. An in-vehicle component may identify a user request to invoke an augmented user interface for a personal device located in a seating zone of a vehicle; activate a vehicle component interface application of the personal device responsive to the user request; and send, to the vehicle component interface application, address information and authentication information of in-vehicle components in the seating zone providing the augmented user interface to the vehicle component interface application. | 1. A system comprising:
a personal device including
a wireless transceiver;
a memory storing preferences of a user; and
a processor, programmed to
scan, using the transceiver, for in-vehicle components of a seating zone of a vehicle in which the personal device is located,
identify available features of the in-vehicle components, and
provide feedback to the user of a notification, using at least one of the in-vehicle components, based on the preferences and the available features. 2. The system of claim 1, wherein the preferences include a first set of preferences descriptive of feedback to provide to the user when the notification is in response to receiving a communication to the personal device, and a second set of preferences descriptive of feedback to provide to the user when the notification is in response to identifying an upcoming event by the personal device. 3. The system of claim 1, wherein the processor is further programmed to identify the available features of the in-vehicle components located within the seating zone of the vehicle according to type information advertised by the in-vehicle components. 4. The system of claim 1, wherein the in-vehicle components located within the seating zone of the vehicle include a speaker in-vehicle component, and the processor is further programmed to mirror audio output of the personal device to the speaker in-vehicle component. 5. The system of claim 1, wherein the in-vehicle components located within the seating zone of the vehicle include a display in-vehicle component, and the processor is further programmed to mirror display output of the personal device to the display in-vehicle component. 6. The system of claim 1, wherein the in-vehicle components located within the seating zone of the vehicle include a keyboard in-vehicle component, and the processor is further programmed to receive user input to the keyboard in-vehicle component as user input to the personal device. 7. The system of claim 1, wherein the processor is further programmed to:
send a connection information request to one of the in-vehicle components; and receive credentials required for access to other of the in-vehicle components responsive to the connection information request. 8. A method comprising:
identifying, by a credential sharing in-vehicle component, a user request to invoke an augmented user interface for a personal device located in a seating zone of a vehicle; activating a component interface application of the personal device responsive to the user request; and sending, to the component interface application, address information and authentication information of in-vehicle components in the seating zone providing the augmented user interface to the component interface application. 9. The method of claim 8, further comprising identifying the user request according to proximity of a user to a proximity sensor of the credential sharing in-vehicle component. 10. The method of claim 8, further comprising identifying the user request according to proximity of the personal device to a sensor of the credential sharing in-vehicle component. 11. The method of claim 8, wherein the address information includes at least one of a media access control address of one of the in-vehicle components providing the augmented user interface or an internet protocol address of the in-vehicle components providing the augmented user interface, and the authentication information includes a passcode for connection to the one of the in-vehicle components providing the augmented user interface. 12. The method of claim 8, further comprising providing feedback responsive to a notification to a user of the personal device using the augmented user interface based on user preferences stored to the personal device and identified features of the in-vehicle components. 13. The method of claim 12, wherein the notification is in response to receiving a communication to the personal device. 14. The method of claim 12, wherein the notification is in response to identifying an upcoming event by the personal device. 15. A non-transitory computer-readable medium embodying instructions that, when executed by a processor of a personal device, cause the personal device to:
receive, responsive to a connection information request, credentials required for access to in-vehicle components located within a seating zone of a vehicle in which the personal device is located, the in-vehicle components providing an augmented user interface to the personal device; connect to display, speaker, and hotspot in-vehicle components of the seating zone using information included in the credentials; and use the in-vehicle components providing the augmented user interface to send feedback to a user of a notification. 16. The medium of claim 15, further embodying instructions that, when executed by a processor of a personal device, cause the personal device to:
maintain preferences of the user of the personal device; and send the feedback using at least one of the in-vehicle components, based on the preferences and identified features of the in-vehicle components. 17. The medium of claim 16, wherein the preferences include a first set of preferences descriptive of feedback to provide to the user when the notification is in response to receiving a communication to the personal device, and a second set of preferences descriptive of feedback to provide to the user when the notification is in response to identifying an upcoming event by the personal device. 18. The medium of claim 15, further embodying instructions that, when executed by a processor of a personal device, cause the personal device to request the augmented user interface. A memory of a personal device may store preferences of a user. A processor of the personal device may be programmed to scan, using a wireless transceiver, for in-vehicle components located within a seating zone of a vehicle in which the personal device is located, identify features of the in-vehicle components, and provide feedback to the user of a notification, using at least one of the in-vehicle components, based on the preferences and features of the in-vehicle components. An in-vehicle component may identify a user request to invoke an augmented user interface for a personal device located in a seating zone of a vehicle; activate a vehicle component interface application of the personal device responsive to the user request; and send, to the vehicle component interface application, address information and authentication information of in-vehicle components in the seating zone providing the augmented user interface to the vehicle component interface application.
1. A system comprising:
a personal device including
a wireless transceiver;
a memory storing preferences of a user; and
a processor, programmed to
scan, using the transceiver, for in-vehicle components of a seating zone of a vehicle in which the personal device is located,
identify available features of the in-vehicle components, and
provide feedback to the user of a notification, using at least one of the in-vehicle components, based on the preferences and the available features. 2. The system of claim 1, wherein the preferences include a first set of preferences descriptive of feedback to provide to the user when the notification is in response to receiving a communication to the personal device, and a second set of preferences descriptive of feedback to provide to the user when the notification is in response to identifying an upcoming event by the personal device. 3. The system of claim 1, wherein the processor is further programmed to identify the available features of the in-vehicle components located within the seating zone of the vehicle according to type information advertised by the in-vehicle components. 4. The system of claim 1, wherein the in-vehicle components located within the seating zone of the vehicle include a speaker in-vehicle component, and the processor is further programmed to mirror audio output of the personal device to the speaker in-vehicle component. 5. The system of claim 1, wherein the in-vehicle components located within the seating zone of the vehicle include a display in-vehicle component, and the processor is further programmed to mirror display output of the personal device to the display in-vehicle component. 6. The system of claim 1, wherein the in-vehicle components located within the seating zone of the vehicle include a keyboard in-vehicle component, and the processor is further programmed to receive user input to the keyboard in-vehicle component as user input to the personal device. 7. The system of claim 1, wherein the processor is further programmed to:
send a connection information request to one of the in-vehicle components; and receive credentials required for access to other of the in-vehicle components responsive to the connection information request. 8. A method comprising:
identifying, by a credential sharing in-vehicle component, a user request to invoke an augmented user interface for a personal device located in a seating zone of a vehicle; activating a component interface application of the personal device responsive to the user request; and sending, to the component interface application, address information and authentication information of in-vehicle components in the seating zone providing the augmented user interface to the component interface application. 9. The method of claim 8, further comprising identifying the user request according to proximity of a user to a proximity sensor of the credential sharing in-vehicle component. 10. The method of claim 8, further comprising identifying the user request according to proximity of the personal device to a sensor of the credential sharing in-vehicle component. 11. The method of claim 8, wherein the address information includes at least one of a media access control address of one of the in-vehicle components providing the augmented user interface or an internet protocol address of the in-vehicle components providing the augmented user interface, and the authentication information includes a passcode for connection to the one of the in-vehicle components providing the augmented user interface. 12. The method of claim 8, further comprising providing feedback responsive to a notification to a user of the personal device using the augmented user interface based on user preferences stored to the personal device and identified features of the in-vehicle components. 13. The method of claim 12, wherein the notification is in response to receiving a communication to the personal device. 14. The method of claim 12, wherein the notification is in response to identifying an upcoming event by the personal device. 15. A non-transitory computer-readable medium embodying instructions that, when executed by a processor of a personal device, cause the personal device to:
receive, responsive to a connection information request, credentials required for access to in-vehicle components located within a seating zone of a vehicle in which the personal device is located, the in-vehicle components providing an augmented user interface to the personal device; connect to display, speaker, and hotspot in-vehicle components of the seating zone using information included in the credentials; and use the in-vehicle components providing the augmented user interface to send feedback to a user of a notification. 16. The medium of claim 15, further embodying instructions that, when executed by a processor of a personal device, cause the personal device to:
maintain preferences of the user of the personal device; and send the feedback using at least one of the in-vehicle components, based on the preferences and identified features of the in-vehicle components. 17. The medium of claim 16, wherein the preferences include a first set of preferences descriptive of feedback to provide to the user when the notification is in response to receiving a communication to the personal device, and a second set of preferences descriptive of feedback to provide to the user when the notification is in response to identifying an upcoming event by the personal device. 18. The medium of claim 15, further embodying instructions that, when executed by a processor of a personal device, cause the personal device to request the augmented user interface. | 2,600 |
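The credential-sharing flow described in claims 8-11 above (a credential-sharing in-vehicle component handing a personal device the address and authentication information for the seating-zone components) can be sketched roughly as follows. This is an illustrative sketch only, not the patent's implementation; all names (`InVehicleComponent`, `share_credentials`, the sample addresses and passcodes) are invented for the example.

```python
# Hypothetical sketch of the seating-zone credential sharing in claims 8-11:
# on a user request (e.g. a proximity-sensor trigger, claims 9-10), return
# the address information (MAC/IP, claim 11) and authentication information
# (passcode, claim 11) for every in-vehicle component in the seating zone.
from dataclasses import dataclass

@dataclass
class InVehicleComponent:
    name: str          # e.g. "display", "speaker", "hotspot"
    mac_address: str   # media access control address
    ip_address: str    # internet protocol address
    passcode: str      # passcode for connection to the component

def share_credentials(zone_components, user_request):
    """Return connection information for all components in the zone,
    but only when a user request has been identified."""
    if not user_request:
        return []
    return [
        {"component": c.name,
         "address": {"mac": c.mac_address, "ip": c.ip_address},
         "auth": {"passcode": c.passcode}}
        for c in zone_components
    ]

zone = [
    InVehicleComponent("display", "00:1A:2B:3C:4D:5E", "192.168.1.10", "disp-pass"),
    InVehicleComponent("speaker", "00:1A:2B:3C:4D:5F", "192.168.1.11", "spkr-pass"),
]
creds = share_credentials(zone, user_request=True)
```

The component-interface application on the personal device would then use each entry in `creds` to connect to the display, speaker, and hotspot components, as in claim 15.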
9,752 | 9,752 | 15,085,685 | 2,647 | A first smart card in a first wireless communication device receives a first profile that indicates a subscription to provide wireless connectivity to a user. The first profile is a copy of at least a portion of a second profile previously established by the user and stored on a second smart card in a second wireless communication device. The first wireless communication device then establishes a first wireless connection with a network using the subscription indicated by the first profile. | 1. A method comprising:
receiving, at a first smart card in a first wireless communication device, a first profile that indicates a subscription to provide wireless connectivity to a user, wherein the first profile is a copy of at least a portion of a second profile previously established by the user and stored on a second smart card in a second wireless communication device; and establishing a first wireless connection between the first wireless communication device and a network using the subscription indicated by the first profile. 2. The method of claim 1, wherein the first profile comprises at least one of an integrated circuit card identifier (ICCID) that identifies the second smart card, an international mobile subscriber identity (IMSI) that uniquely identifies the user, a phone number, a mobile station international subscriber directory number (MSISDN) associated with the user, and a security key shared by the second wireless communication device and the network. 3. The method of claim 1, wherein receiving the first profile comprises receiving the first profile from a subscription manager in response to the subscription manager receiving a first request from the second wireless communication device to generate the copy of at least the portion of the second profile. 4. The method of claim 3, further comprising:
providing a second request from the first wireless communication device to download the first profile from the subscription manager, and wherein receiving the first profile comprises receiving the first profile in response to providing the second request. 5. The method of claim 4, wherein receiving the first profile comprises receiving the first profile via a second wireless connection between the first wireless communication device and the second wireless communication device. 6. The method of claim 4, wherein receiving the first profile comprises receiving the first profile via a wireless connection between the first wireless communication device and the network. 7. The method of claim 1, further comprising:
terminating the first wireless connection; and removing the first profile from the first smart card in response to terminating the first wireless connection. 8. A method comprising:
storing, on a first smart card implemented in a first wireless communication device, a first profile that indicates a subscription to provide wireless connectivity to a user associated with the first smart card; and providing, from the first wireless communication device, a first request to generate a copy of at least a portion of the first profile for provision to a second smart card implemented in a second wireless communication device. 9. The method of claim 8, wherein the first profile comprises at least one of an integrated circuit card identifier (ICCID) that identifies the first smart card, an international mobile subscriber identity (IMSI) that uniquely identifies the user, a phone number associated with the user, a mobile station international subscriber directory number (MSISDN) associated with the user, and a security key shared by the second wireless communication device and a network. 10. The method of claim 8, further comprising:
activating the first smart card in response to installing the first smart card in the first wireless communication device; and establishing a first wireless connection between the first wireless communication device and a network based on the subscription indicated in the first profile. 11. The method of claim 10, further comprising:
terminating the first wireless connection in response to the second wireless communication device initiating establishment of a second wireless connection with the network based on the subscription indicated in the copy of at least the portion of the first profile. 12. The method of claim 11, further comprising:
providing a request from the first wireless communication device to terminate the second wireless connection; and reestablishing the first wireless connection in response to terminating the second wireless connection. 13. A method comprising:
receiving, at a subscription manager for a network, a first request to generate a copy of at least a portion of a first profile stored in a first smart card in a first wireless communication device for provision to a second smart card in a second wireless communication device, wherein the first profile was previously established by a user to indicate a subscription to provide wireless connectivity to the user; generating the copy of at least the portion of the first profile in response to receiving the first request; and providing the copy of at least the portion of the first profile to the second smart card. 14. The method of claim 13, wherein the first profile comprises at least one of an integrated circuit card identifier (ICCID) that identifies the first smart card, an international mobile subscriber identity (IMSI) that uniquely identifies the user, a phone number associated with the user, a mobile station international subscriber directory number (MSISDN) associated with the user, and a first security key shared by the first wireless communication device and the network. 15. The method of claim 14, further comprising:
generating a second security key shared by the second wireless communication device and the network; and appending the second security key to the copy of at least the portion of the first profile. 16. The method of claim 13, wherein receiving the first request to provide at least the portion of the copy of the first profile comprises receiving the first request via a first wireless connection established between the first wireless communication device and the network on the basis of the subscription indicated in the first profile. 17. The method of claim 13, further comprising:
receiving a second request to provide the copy of at least the portion of the first profile to the second smart card; verifying that the second request was initiated by the user; and providing at least the portion of the copy of the first profile in response to receiving the second request. 18. A first wireless communication device, comprising:
a first smart card configured to receive a first profile that indicates a subscription to provide wireless connectivity to a user, wherein the first profile is a copy of at least a portion of a second profile previously established by the user and stored on a second smart card in a second wireless communication device; and a transceiver configured to establish a first wireless connection between the first wireless communication device and a network using the subscription indicated by the first profile. 19. A first wireless communication device comprising:
a smart card configured to store a first profile that indicates a subscription to provide wireless connectivity to a user associated with the first smart card; and a transceiver configured to provide a first request to generate a copy of at least a portion of the first profile for provision to a second smart card implemented in a second wireless communication device. 20. An apparatus comprising:
a transceiver to receive a first request to generate a copy of at least a portion of a first profile stored in a first smart card in a first wireless communication device for provision to a second smart card in a second wireless communication device, wherein the first profile was previously established by a user to indicate a subscription to provide wireless connectivity to the user; a processor to generate the copy of at least the portion of the first profile in response to receiving the first request, wherein the transceiver is configured to provide the copy of at least the portion of the first profile to the second smart card. | A first smart card in a first wireless communication device receives a first profile that indicates a subscription to provide wireless connectivity to a user. The first profile is a copy of at least a portion of a second profile previously established by the user and stored on a second smart card in a second wireless communication device. The first wireless communication device then establishes a first wireless connection with a network using the subscription indicated by the first profile.1. A method comprising:
receiving, at a first smart card in a first wireless communication device, a first profile that indicates a subscription to provide wireless connectivity to a user, wherein the first profile is a copy of at least a portion of a second profile previously established by the user and stored on a second smart card in a second wireless communication device; and establishing a first wireless connection between the first wireless communication device and a network using the subscription indicated by the first profile. 2. The method of claim 1, wherein the first profile comprises at least one of an integrated circuit card identifier (ICCID) that identifies the second smart card, an international mobile subscriber identity (IMSI) that uniquely identifies the user, a phone number, a mobile station international subscriber directory number (MSISDN) associated with the user, and a security key shared by the second wireless communication device and the network. 3. The method of claim 1, wherein receiving the first profile comprises receiving the first profile from a subscription manager in response to the subscription manager receiving a first request from the second wireless communication device to generate the copy of at least the portion of the second profile. 4. The method of claim 3, further comprising:
providing a second request from the first wireless communication device to download the first profile from the subscription manager, and wherein receiving the first profile comprises receiving the first profile in response to providing the second request. 5. The method of claim 4, wherein receiving the first profile comprises receiving the first profile via a second wireless connection between the first wireless communication device and the second wireless communication device. 6. The method of claim 4, wherein receiving the first profile comprises receiving the first profile via a wireless connection between the first wireless communication device and the network. 7. The method of claim 1, further comprising:
terminating the first wireless connection; and removing the first profile from the first smart card in response to terminating the first wireless connection. 8. A method comprising:
storing, on a first smart card implemented in a first wireless communication device, a first profile that indicates a subscription to provide wireless connectivity to a user associated with the first smart card; and providing, from the first wireless communication device, a first request to generate a copy of at least a portion of the first profile for provision to a second smart card implemented in a second wireless communication device. 9. The method of claim 8, wherein the first profile comprises at least one of an integrated circuit card identifier (ICCID) that identifies the first smart card, an international mobile subscriber identity (IMSI) that uniquely identifies the user, a phone number associated with the user, a mobile station international subscriber directory number (MSISDN) associated with the user, and a security key shared by the second wireless communication device and a network. 10. The method of claim 8, further comprising:
activating the first smart card in response to installing the first smart card in the first wireless communication device; and establishing a first wireless connection between the first wireless communication device and a network based on the subscription indicated in the first profile. 11. The method of claim 10, further comprising:
terminating the first wireless connection in response to the second wireless communication device initiating establishment of a second wireless connection with the network based on the subscription indicated in the copy of at least the portion of the first profile. 12. The method of claim 11, further comprising:
providing a request from the first wireless communication device to terminate the second wireless connection; and reestablishing the first wireless connection in response to terminating the second wireless connection. 13. A method comprising:
receiving, at a subscription manager for a network, a first request to generate a copy of at least a portion of a first profile stored in a first smart card in a first wireless communication device for provision to a second smart card in a second wireless communication device, wherein the first profile was previously established by a user to indicate a subscription to provide wireless connectivity to the user; generating the copy of at least the portion of the first profile in response to receiving the first request; and providing the copy of at least the portion of the first profile to the second smart card. 14. The method of claim 13, wherein the first profile comprises at least one of an integrated circuit card identifier (ICCID) that identifies the first smart card, an international mobile subscriber identity (IMSI) that uniquely identifies the user, a phone number associated with the user, a mobile station international subscriber directory number (MSISDN) associated with the user, and a first security key shared by the first wireless communication device and the network. 15. The method of claim 14, further comprising:
generating a second security key shared by the second wireless communication device and the network; and appending the second security key to the copy of at least the portion of the first profile. 16. The method of claim 13, wherein receiving the first request to provide at least the portion of the copy of the first profile comprises receiving the first request via a first wireless connection established between the first wireless communication device and the network on the basis of the subscription indicated in the first profile. 17. The method of claim 13, further comprising:
receiving a second request to provide the copy of at least the portion of the first profile to the second smart card; verifying that the second request was initiated by the user; and providing at least the portion of the copy of the first profile in response to receiving the second request. 18. A first wireless communication device, comprising:
a first smart card configured to receive a first profile that indicates a subscription to provide wireless connectivity to a user, wherein the first profile is a copy of at least a portion of a second profile previously established by the user and stored on a second smart card in a second wireless communication device; and a transceiver configured to establish a first wireless connection between the first wireless communication device and a network using the subscription indicated by the first profile. 19. A first wireless communication device comprising:
a smart card configured to store a first profile that indicates a subscription to provide wireless connectivity to a user associated with the first smart card; and a transceiver configured to provide a first request to generate a copy of at least a portion of the first profile for provision to a second smart card implemented in a second wireless communication device. 20. An apparatus comprising:
a transceiver to receive a first request to generate a copy of at least a portion of a first profile stored in a first smart card in a first wireless communication device for provision to a second smart card in a second wireless communication device, wherein the first profile was previously established by a user to indicate a subscription to provide wireless connectivity to the user; a processor to generate the copy of at least the portion of the first profile in response to receiving the first request, wherein the transceiver is configured to provide the copy of at least the portion of the first profile to the second smart card. | 2,600 |
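The subscription-manager steps in claims 13-15 above (copy at least a portion of the first profile, then generate and append a second security key for the second device) can be sketched minimally as below. This is a hedged illustration under invented field names, not the patent's or any carrier's actual provisioning protocol; the profile values are fabricated examples.

```python
# Illustrative sketch of claims 13-15: the subscription manager copies
# selected fields of the stored first profile and appends a freshly
# generated second security key shared with the network.
import secrets

def copy_profile(first_profile, fields=("iccid", "imsi", "msisdn", "phone")):
    # Generate the copy of at least a portion of the first profile (claim 13).
    copy = {k: first_profile[k] for k in fields if k in first_profile}
    # Generate a second security key and append it to the copy (claim 15).
    copy["security_key"] = secrets.token_hex(16)
    return copy

profile = {"iccid": "8914800000000000000", "imsi": "310150123456789",
           "msisdn": "15551234567", "security_key": "old-key"}
dup = copy_profile(profile)
```

Note the copy deliberately does not carry over the first device's security key, matching claim 15's generation of a distinct key for the second wireless communication device.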
9,753 | 9,753 | 14,609,812 | 2,657 | A voice input by a vehicle user is taken as a basis for determining at least one keyword from a set of prescribed keywords. The at least one keyword is taken as a basis for determining at least one event and/or at least one state from a set of events and/or states of the vehicle that are stored during a prescribed period of time. This involves the respective event and/or the respective state being stored in conjunction with at least one condition occurrence that characterizes a respective condition that needs to be met in order for the event to occur and/or the respective state to exist. In addition, a response is determined from a set of prescribed responses on the basis of the condition occurrence that is associated with the determined event and/or state. Furthermore, a signaling signal is determined on the basis of the ascertained response. | 1. A method for operating a voice-controlled information system for a vehicle, the method comprising the acts of:
determining, based on a voice input by a vehicle user, at least one keyword from a set of prescribed keywords; determining, based on the determined at least one keyword, at least one event and/or at least one state from a set of events and/or states of the vehicle that are stored during a prescribed period of time, wherein the respective event and/or the state is stored in conjunction with at least one condition occurrence characterizing a respective condition that needs to be met in order for the event to occur and/or the state to exist; determining a response from a set of prescribed responses based on the condition occurrence associated with the determined event and/or determined state; and determining a signaling signal based on the determined response. 2. The method according to claim 1, wherein
the at least one keyword is taken as a basis for determining a content section of a prescribed interactive instruction manual, and the signaling signal is determined based on the content section. 3. The method according to claim 2, wherein
the response is determined based on a prescribed characteristic property of the vehicle explicitly identifying the vehicle. 4. The method according to claim 1, wherein
the response is determined based on a prescribed characteristic property of the vehicle explicitly identifying the vehicle. 5. An apparatus for operating a voice-controlled information system, comprising:
a control unit comprising a processor executing a stored program, the stored program having program code segments that: determine, based on a voice input by a vehicle user, at least one keyword from a set of prescribed keywords; determine, based on the determined at least one keyword, at least one event and/or at least one state from a set of events and/or states of the vehicle that are stored during a prescribed period of time, wherein the respective event and/or the state is stored in conjunction with at least one condition occurrence characterizing a respective condition that needs to be met in order for the event to occur and/or state to exist; determine a response from a set of prescribed responses based on the condition occurrence associated with the determined event and/or determined state; and determine a signaling signal based on the determined response. 6. The apparatus according to claim 5, wherein
the at least one keyword is taken as a basis for determining a content section of a prescribed interactive instruction manual, and the signaling signal is determined based on the content section. 7. The apparatus according to claim 6, wherein
the response is determined based on a prescribed characteristic property of the vehicle explicitly identifying the vehicle. 8. The apparatus according to claim 5, wherein
the response is determined based on a prescribed characteristic property of the vehicle explicitly identifying the vehicle. | A voice input by a vehicle user is taken as a basis for determining at least one keyword from a set of prescribed keywords. The at least one keyword is taken as a basis for determining at least one event and/or at least one state from a set of events and/or states of the vehicle that are stored during a prescribed period of time. This involves the respective event and/or the respective state being stored in conjunction with at least one condition occurrence that characterizes a respective condition that needs to be met in order for the event to occur and/or the respective state to exist. In addition, a response is determined from a set of prescribed responses on the basis of the condition occurrence that is associated with the determined event and/or state. Furthermore, a signaling signal is determined on the basis of the ascertained response.1. A method for operating a voice-controlled information system for a vehicle, the method comprising the acts of:
determining, based on a voice input by a vehicle user, at least one keyword from a set of prescribed keywords; determining, based on the determined at least one keyword, at least one event and/or at least one state from a set of events and/or states of the vehicle that are stored during a prescribed period of time, wherein the respective event and/or the state is stored in conjunction with at least one condition occurrence characterizing a respective condition that needs to be met in order for the event to occur and/or the state to exist; determining a response from a set of prescribed responses based on the condition occurrence associated with the determined event and/or determined state; and determining a signaling signal based on the determined response. 2. The method according to claim 1, wherein
the at least one keyword is taken as a basis for determining a content section of a prescribed interactive instruction manual, and the signaling signal is determined based on the content section. 3. The method according to claim 2, wherein
the response is determined based on a prescribed characteristic property of the vehicle explicitly identifying the vehicle. 4. The method according to claim 1, wherein
the response is determined based on a prescribed characteristic property of the vehicle explicitly identifying the vehicle. 5. An apparatus for operating a voice-controlled information system, comprising:
a control unit comprising a processor executing a stored program, the stored program having program code segments that: determine, based on a voice input by a vehicle user, at least one keyword from a set of prescribed keywords; determine, based on the determined at least one keyword, at least one event and/or at least one state from a set of events and/or states of the vehicle that are stored during a prescribed period of time, wherein the respective event and/or the state is stored in conjunction with at least one condition occurrence characterizing a respective condition that needs to be met in order for the event to occur and/or state to exist; determine a response from a set of prescribed responses based on the condition occurrence associated with the determined event and/or determined state; and determine a signaling signal based on the determined response. 6. The apparatus according to claim 5, wherein
the at least one keyword is taken as a basis for determining a content section of a prescribed interactive instruction manual, and the signaling signal is determined based on the content section. 7. The apparatus according to claim 6, wherein
the response is determined based on a prescribed characteristic property of the vehicle explicitly identifying the vehicle. 8. The apparatus according to claim 5, wherein
the response is determined based on a prescribed characteristic property of the vehicle explicitly identifying the vehicle. | 2,600 |
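The lookup chain in claim 1 of the voice-controlled information system above (keyword from the voice input → stored event/state with its condition occurrence → prescribed response) can be sketched as follows. This is a minimal sketch assuming a dictionary-based store; all keywords, events, conditions, and response strings are invented for illustration.

```python
# Minimal sketch of claim 1's chain: determine a keyword from the voice
# input, look up the stored event and its condition occurrence, then
# determine the response from the condition occurrence.

PRESCRIBED_KEYWORDS = {"warning", "lamp", "oil"}

# Events stored during a prescribed period of time, each in conjunction
# with the condition occurrence that had to be met for the event to occur.
EVENT_LOG = {
    "lamp": {"event": "warning_lamp_on",
             "condition": "oil_pressure_below_threshold"},
}

PRESCRIBED_RESPONSES = {
    "oil_pressure_below_threshold":
        "The warning lamp is on because oil pressure dropped below the threshold.",
}

def answer(voice_input):
    """Return the response (from which a signaling signal would be
    determined), or None if no prescribed keyword matches."""
    words = set(voice_input.lower().split())
    for keyword in words & PRESCRIBED_KEYWORDS:
        entry = EVENT_LOG.get(keyword)
        if entry:
            return PRESCRIBED_RESPONSES.get(entry["condition"])
    return None
```

In the apparatus of claim 5, the same lookups would run as program code segments on the control unit's processor, with the returned string driving the signaling signal.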
9,754 | 9,754 | 15,209,527 | 2,612 | An electronic device includes a deformable housing and a flexible display supported by the deformable housing. One or more flex sensors supported by the deformable housing detect the electronic device is deformed at a deformation portion to partition the flexible display into a first portion and a second portion. One or more processors present content on the first portion of the flexible display in response to detecting the flexible display being deformed, and remediate the second portion of the flexible display to compensate performance degradation of the flexible display resulting from presenting content on the first portion of the flexible display. | 1. An electronic device, comprising;
a deformable housing; a flexible display supported by the deformable housing; one or more flex sensors supported by the deformable housing, the one or more flex sensors detecting when the electronic device is deformed at a deformation portion to partition the flexible display into a first portion and a second portion; and one or more processors operable with the flexible display and the one or more flex sensors, the one or more processors presenting content on the first portion of the flexible display in response to detecting the flexible display being deformed, and remediating the second portion of the flexible display to compensate performance degradation of the flexible display resulting from the presenting content on the first portion of the flexible display. 2. The electronic device of claim 1, the performance degradation comprising one or more of loss of brightness or discoloration of the first portion of the flexible display caused by non-uniform usage of the first portion relative to the second portion. 3. The electronic device of claim 1, the one or more processors remediating the second portion of the flexible display by presenting other content that is complementary to the content on the second portion of the flexible display. 4. The electronic device of claim 3, the other content comprising a mirror image of the content. 5. The electronic device of claim 4, the presenting and the remediating occurring concurrently. 6. The electronic device of claim 1, the one or more processors detecting a docking operation transitioning the electronic device to a docked mode of operation, the remediating occurring while the electronic device is in the docked mode of operation. 7. The electronic device of claim 1, the one or more processors further monitoring a presentation characteristic of the content during the presenting, wherein the remediating is a function of the presentation characteristic. 8. 
The electronic device of claim 7, the presentation characteristic comprising one or more of an ON time of the first portion, a brightness of the first portion, an ON pixel value of the first portion, or combinations thereof. 9. An electronic device, comprising:
a deformable housing; a flexible display supported by the deformable housing; one or more flex sensors supported by the deformable housing, the one or more flex sensors detecting when the electronic device is deformed at a deformation portion to partition the flexible display into a first portion and a second portion; and one or more processors operable with the flexible display and the one or more flex sensors, the one or more processors selecting one of the first portion or the second portion on which to present content to remediate the one of the first portion or the second portion to compensate performance degradation as a function of a content presentation history of the first portion and the second portion. 10. The electronic device of claim 9, the one or more processors further presenting the content on the one of the first portion or the second portion. 11. The electronic device of claim 9, the one or more processors further presenting a prompt on another of the first portion or the second portion indicating that the content is presented on the one of the first portion or the second portion. 12. The electronic device of claim 10, the content presentation history comprising a record of an ON time of the first portion and the second portion, a brightness of the first portion and the second portion, an ON pixel value of the first portion and the second portion, or combinations thereof. 13. The electronic device of claim 10, the content presentation history comprising a record of a type of application causing presentation of the content on the first portion and the second portion. 14. The electronic device of claim 10, the one or more processors further causing the another of the first portion or the second portion to enter a low-power or sleep mode. 15. A method, comprising:
detecting, with one or more flex sensors of an electronic device, deformation of a flexible display by a bend; determining, with one or more processors, a portion of the flexible display, disposed to one side of the bend, and requiring remediation to compensate performance degradation of the flexible display resulting from the presenting content to the one side of the bend or to another side of the bend; and remediating the portion of the flexible display. 16. The method of claim 15, further comprising presenting, with the one or more processors, the content on the portion of the flexible display disposed to the one side of the bend, and remediating another portion of the flexible display to the another side of the bend. 17. The method of claim 15, the presenting and the remediating occurring simultaneously. 18. The method of claim 15, further comprising presenting a prompt to the another side of the bend, the prompt instructing content presentation on the portion to the one side of the bend. 19. The method of claim 17, further comprising transitioning portions of the flexible display disposed to the another side of the bend to a low-power or sleep mode of operation. 20. The method of claim 15, further comprising detecting a docking operation transitioning an electronic device comprising the flexible display to a docked mode of operation, the remediating occurring during the docked mode of operation. | An electronic device includes a deformable housing and a flexible display supported by the deformable housing. One or more flex sensors supported by the deformable housing detect the electronic device is deformed at a deformation portion to partition the flexible display into a first portion and a second portion. 
One or more processors present content on the first portion of the flexible display in response to detecting the flexible display being deformed, and remediate the second portion of the flexible display to compensate performance degradation of the flexible display resulting from presenting content on the first portion of the flexible display.1. An electronic device, comprising:
a deformable housing; a flexible display supported by the deformable housing; one or more flex sensors supported by the deformable housing, the one or more flex sensors detecting when the electronic device is deformed at a deformation portion to partition the flexible display into a first portion and a second portion; and one or more processors operable with the flexible display and the one or more flex sensors, the one or more processors presenting content on the first portion of the flexible display in response to detecting the flexible display being deformed, and remediating the second portion of the flexible display to compensate performance degradation of the flexible display resulting from the presenting content on the first portion of the flexible display. 2. The electronic device of claim 1, the performance degradation comprising one or more of loss of brightness or discoloration of the first portion of the flexible display caused by non-uniform usage of the first portion relative to the second portion. 3. The electronic device of claim 1, the one or more processors remediating the second portion of the flexible display by presenting other content that is complementary to the content on the second portion of the flexible display. 4. The electronic device of claim 3, the other content comprising a mirror image of the content. 5. The electronic device of claim 4, the presenting and the remediating occurring concurrently. 6. The electronic device of claim 1, the one or more processors detecting a docking operation transitioning the electronic device to a docked mode of operation, the remediating occurring while the electronic device is in the docked mode of operation. 7. The electronic device of claim 1, the one or more processors further monitoring a presentation characteristic of the content during the presenting, wherein the remediating is a function of the presentation characteristic. 8. 
The electronic device of claim 7, the presentation characteristic comprising one or more of an ON time of the first portion, a brightness of the first portion, an ON pixel value of the first portion, or combinations thereof. 9. An electronic device, comprising:
a deformable housing; a flexible display supported by the deformable housing; one or more flex sensors supported by the deformable housing, the one or more flex sensors detecting when the electronic device is deformed at a deformation portion to partition the flexible display into a first portion and a second portion; and one or more processors operable with the flexible display and the one or more flex sensors, the one or more processors selecting one of the first portion or the second portion on which to present content to remediate the one of the first portion or the second portion to compensate performance degradation as a function of a content presentation history of the first portion and the second portion. 10. The electronic device of claim 9, the one or more processors further presenting the content on the one of the first portion or the second portion. 11. The electronic device of claim 9, the one or more processors further presenting a prompt on another of the first portion or the second portion indicating that the content is presented on the one of the first portion or the second portion. 12. The electronic device of claim 10, the content presentation history comprising a record of an ON time of the first portion and the second portion, a brightness of the first portion and the second portion, an ON pixel value of the first portion and the second portion, or combinations thereof. 13. The electronic device of claim 10, the content presentation history comprising a record of a type of application causing presentation of the content on the first portion and the second portion. 14. The electronic device of claim 10, the one or more processors further causing the another of the first portion or the second portion to enter a low-power or sleep mode. 15. A method, comprising:
detecting, with one or more flex sensors of an electronic device, deformation of a flexible display by a bend; determining, with one or more processors, a portion of the flexible display, disposed to one side of the bend, and requiring remediation to compensate performance degradation of the flexible display resulting from the presenting content to the one side of the bend or to another side of the bend; and remediating the portion of the flexible display. 16. The method of claim 15, further comprising presenting, with the one or more processors, the content on the portion of the flexible display disposed to the one side of the bend, and remediating another portion of the flexible display to the another side of the bend. 17. The method of claim 15, the presenting and the remediating occurring simultaneously. 18. The method of claim 15, further comprising presenting a prompt to the another side of the bend, the prompt instructing content presentation on the portion to the one side of the bend. 19. The method of claim 17, further comprising transitioning portions of the flexible display disposed to the another side of the bend to a low-power or sleep mode of operation. 20. The method of claim 15, further comprising detecting a docking operation transitioning an electronic device comprising the flexible display to a docked mode of operation, the remediating occurring during the docked mode of operation. | 2,600 |
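As an illustrative aside, not part of the patent record above: the selection logic that claim 9 describes (picking which display portion presents content as a function of each portion's presentation history, so the less-worn portion is used and the imbalance is remediated) could be sketched roughly as follows. All names, the wear heuristic, and the history fields are assumptions for illustration only; nothing here comes from the patent itself.

```python
# Hypothetical sketch of claim 9's portion selection: present content on the
# less-worn portion (by cumulative ON time weighted by brightness) so that
# usage, and therefore degradation, evens out across the bend.
from dataclasses import dataclass


@dataclass
class PortionHistory:
    on_time_hours: float   # cumulative ON time of this display portion
    avg_brightness: float  # average brightness while ON, in [0.0, 1.0]


def wear_score(h: PortionHistory) -> float:
    # Simple proxy for degradation: longer and brighter use -> more wear.
    return h.on_time_hours * h.avg_brightness


def select_portion(first: PortionHistory, second: PortionHistory) -> str:
    """Return which portion should present content next.

    Routing content to the less-worn portion 'remediates' the other one,
    mirroring the claim's selection as a function of presentation history.
    """
    return "first" if wear_score(first) <= wear_score(second) else "second"


first = PortionHistory(on_time_hours=1200.0, avg_brightness=0.8)
second = PortionHistory(on_time_hours=300.0, avg_brightness=0.6)
print(select_portion(first, second))  # the less-worn second portion is chosen
```

A real controller would also fold in the ON pixel values and application types that claims 12 and 13 enumerate; the single scalar score here is just the smallest version of the idea.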
9,755 | 9,755 | 14,479,902 | 2,659 | A method, computer-readable storage medium, and device for converting user handwriting into text information by reflecting a paragraph form of the user handwriting in an electronic device is provided. The method includes receiving a handwriting input from a user, recognizing the received handwriting input and converting the recognized handwriting input into text information, recognizing a paragraph form of the received handwriting input, and applying the recognized paragraph form to the converted text information. | 1. A method of converting user handwriting into text information by reflecting a paragraph form of the user handwriting in an electronic device, the method comprising:
receiving a handwriting input from a user; recognizing the received handwriting input and converting the recognized handwriting input into text information; recognizing a paragraph form of the received handwriting input; and applying the recognized paragraph form to the converted text information. 2. The method of claim 1, wherein,
the paragraph form comprises a paragraph shape, and recognizing the paragraph form comprises recognizing, as the paragraph shape, at least one selected from indent, outdent, word spacing, alignment, line spacing, and an outline level. 3. The method of claim 1, wherein,
the paragraph form comprises a font of a letter included in a paragraph, and recognizing the paragraph form comprises: calculating an average size of letters included in the handwriting input; recognizing letters, having a size within a predetermined range from the calculated average size, as having a size of a font corresponding to the average size; and recognizing letters, having a size exceeding a range from the calculated average size, as having a larger font size than the average size. 4. The method of claim 1, wherein,
the paragraph form comprises a bullet, and recognizing the paragraph form comprises recognizing a figure, which distinguishes a sentence or a paragraph, as a bullet. 5. The method of claim 1, further comprising:
recognizing an entity included in the received handwriting input; and applying information, which is predetermined in correspondence with the recognized entity, to the converted text information. 6. The method of claim 1, further comprising displaying at least one selected from the received handwriting input and the converted text information. 7. The method of claim 6, wherein displaying at least one selected from the received handwriting input and the converted text information comprises storing the received handwriting input and the converted text information, and converting therebetween according to a request of a user. 8. The method of claim 1, wherein recognizing the paragraph form of the received handwriting input comprises:
determining a region, in which the handwriting input exists, as an entire region to recognize a paragraph form candidate group in the entire region; and determining a region, in which a sentence included in the handwriting input exists, as a sentence region to recognize, as a paragraph form, at least one paragraph form of the paragraph form candidate group in the sentence region. 9. The method of claim 1, wherein recognizing the paragraph form of the received handwriting input comprises:
receiving an input, which selects a Region Of Interest (ROI) in the handwriting input, from the user to recognize a paragraph form candidate group in the ROI; and determining a region, in which a sentence included in the ROI exists, as a sentence region to recognize, as a paragraph form, at least one paragraph form of the paragraph form candidate group in the sentence region. 10. A non-transitory computer-readable storage medium having recorded thereon a computer program for executing a method of converting user handwriting into text information by reflecting a paragraph form of the user handwriting in an electronic device, the method comprising:
receiving a handwriting input from a user; recognizing the received handwriting input and converting the recognized handwriting input into text information; recognizing a paragraph form of the received handwriting input; and applying the recognized paragraph form to the converted text information. 11. An electronic device comprising:
a user handwriting input receiving unit configured to receive a handwriting input from a user; a text information converting unit configured to recognize the received handwriting input and convert the recognized handwriting input into text information; a paragraph form recognizing unit configured to recognize a paragraph form of the received handwriting input; and a form applying unit configured to apply the recognized paragraph form to the converted text information. 12. The electronic device of claim 11, wherein,
the paragraph form comprises a paragraph shape, and the paragraph form recognizing unit is configured to recognize, as the paragraph shape, at least one selected from indent, outdent, word spacing, alignment, line spacing, and an outline level. 13. The electronic device of claim 11, wherein,
the paragraph form comprises a font of a letter included in a paragraph, and the paragraph form recognizing unit calculates an average of letters included in the handwriting input, recognizes letters, having a size within a predetermined range from the calculated average, as having a size of a font corresponding to the average, and recognizes letters, having a size exceeding a range from the calculated average, as having a larger font size than an average size. 14. The electronic device of claim 11, wherein,
the paragraph form comprises a bullet, and the paragraph form recognizing unit is configured to recognize a figure, which distinguishes a sentence or a paragraph, as a bullet. 15. The electronic device of claim 11, further comprising an entity recognizing unit configured to recognize an entity included in the received handwriting input,
wherein the form applying unit is configured to apply information, which is predetermined in correspondence with the recognized entity, to the converted text information. 16. The electronic device of claim 11, further comprising a display unit configured to display at least one selected from the received handwriting input and the converted text information. 17. The electronic device of claim 16, wherein the display unit is configured to store the received handwriting input and the converted text information, and convert therebetween according to a request of a user. 18. The electronic device of claim 11, wherein the paragraph form recognizing unit is configured to determine a region, in which the handwriting input exists, as an entire region to recognize a paragraph form candidate group in the entire region, and determine a region, in which a sentence included in the handwriting input exists, as a sentence region to recognize, as a paragraph form, at least one paragraph form of the paragraph form candidate group in the sentence region. 19. The electronic device of claim 11, wherein the paragraph form recognizing unit is configured to receive an input, which selects a Region Of Interest (ROI) in the handwriting input, from the user to recognize a paragraph form candidate group in the ROI, and determine a region, in which a sentence included in the ROI exists, as a sentence region to recognize, as a paragraph form, at least one paragraph form of the paragraph form candidate group in the sentence region. | A method, computer-readable storage medium, and device for converting user handwriting into text information by reflecting a paragraph form of the user handwriting in an electronic device is provided. 
The method includes receiving a handwriting input from a user, recognizing the received handwriting input and converting the recognized handwriting input into text information, recognizing a paragraph form of the received handwriting input, and applying the recognized paragraph form to the converted text information.1. A method of converting user handwriting into text information by reflecting a paragraph form of the user handwriting in an electronic device, the method comprising:
receiving a handwriting input from a user; recognizing the received handwriting input and converting the recognized handwriting input into text information; recognizing a paragraph form of the received handwriting input; and applying the recognized paragraph form to the converted text information. 2. The method of claim 1, wherein,
the paragraph form comprises a paragraph shape, and recognizing the paragraph form comprises recognizing, as the paragraph shape, at least one selected from indent, outdent, word spacing, alignment, line spacing, and an outline level. 3. The method of claim 1, wherein,
the paragraph form comprises a font of a letter included in a paragraph, and recognizing the paragraph form comprises: calculating an average size of letters included in the handwriting input; recognizing letters, having a size within a predetermined range from the calculated average size, as having a size of a font corresponding to the average size; and recognizing letters, having a size exceeding a range from the calculated average size, as having a larger font size than the average size. 4. The method of claim 1, wherein,
the paragraph form comprises a bullet, and recognizing the paragraph form comprises recognizing a figure, which distinguishes a sentence or a paragraph, as a bullet. 5. The method of claim 1, further comprising:
recognizing an entity included in the received handwriting input; and applying information, which is predetermined in correspondence with the recognized entity, to the converted text information. 6. The method of claim 1, further comprising displaying at least one selected from the received handwriting input and the converted text information. 7. The method of claim 6, wherein displaying at least one selected from the received handwriting input and the converted text information comprises storing the received handwriting input and the converted text information, and converting therebetween according to a request of a user. 8. The method of claim 1, wherein recognizing the paragraph form of the received handwriting input comprises:
determining a region, in which the handwriting input exists, as an entire region to recognize a paragraph form candidate group in the entire region; and determining a region, in which a sentence included in the handwriting input exists, as a sentence region to recognize, as a paragraph form, at least one paragraph form of the paragraph form candidate group in the sentence region. 9. The method of claim 1, wherein recognizing the paragraph form of the received handwriting input comprises:
receiving an input, which selects a Region Of Interest (ROI) in the handwriting input, from the user to recognize a paragraph form candidate group in the ROI; and determining a region, in which a sentence included in the ROI exists, as a sentence region to recognize, as a paragraph form, at least one paragraph form of the paragraph form candidate group in the sentence region. 10. A non-transitory computer-readable storage medium having recorded thereon a computer program for executing a method of converting user handwriting into text information by reflecting a paragraph form of the user handwriting in an electronic device, the method comprising:
receiving a handwriting input from a user; recognizing the received handwriting input and converting the recognized handwriting input into text information; recognizing a paragraph form of the received handwriting input; and applying the recognized paragraph form to the converted text information. 11. An electronic device comprising:
a user handwriting input receiving unit configured to receive a handwriting input from a user; a text information converting unit configured to recognize the received handwriting input and convert the recognized handwriting input into text information; a paragraph form recognizing unit configured to recognize a paragraph form of the received handwriting input; and a form applying unit configured to apply the recognized paragraph form to the converted text information. 12. The electronic device of claim 11, wherein,
the paragraph form comprises a paragraph shape, and the paragraph form recognizing unit is configured to recognize, as the paragraph shape, at least one selected from indent, outdent, word spacing, alignment, line spacing, and an outline level. 13. The electronic device of claim 11, wherein,
the paragraph form comprises a font of a letter included in a paragraph, and the paragraph form recognizing unit calculates an average of letters included in the handwriting input, recognizes letters, having a size within a predetermined range from the calculated average, as having a size of a font corresponding to the average, and recognizes letters, having a size exceeding a range from the calculated average, as having a larger font size than an average size. 14. The electronic device of claim 11, wherein,
the paragraph form comprises a bullet, and the paragraph form recognizing unit is configured to recognize a figure, which distinguishes a sentence or a paragraph, as a bullet. 15. The electronic device of claim 11, further comprising an entity recognizing unit configured to recognize an entity included in the received handwriting input,
wherein the form applying unit is configured to apply information, which is predetermined in correspondence with the recognized entity, to the converted text information. 16. The electronic device of claim 11, further comprising a display unit configured to display at least one selected from the received handwriting input and the converted text information. 17. The electronic device of claim 16, wherein the display unit is configured to store the received handwriting input and the converted text information, and convert therebetween according to a request of a user. 18. The electronic device of claim 11, wherein the paragraph form recognizing unit is configured to determine a region, in which the handwriting input exists, as an entire region to recognize a paragraph form candidate group in the entire region, and determine a region, in which a sentence included in the handwriting input exists, as a sentence region to recognize, as a paragraph form, at least one paragraph form of the paragraph form candidate group in the sentence region. 19. The electronic device of claim 11, wherein the paragraph form recognizing unit is configured to receive an input, which selects a Region Of Interest (ROI) in the handwriting input, from the user to recognize a paragraph form candidate group in the ROI, and determine a region, in which a sentence included in the ROI exists, as a sentence region to recognize, as a paragraph form, at least one paragraph form of the paragraph form candidate group in the sentence region. | 2,600 |
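As an illustrative aside, not part of the patent record above: the font-size heuristic in claim 3 (letters within a predetermined range of the average letter size get the base font; letters exceeding that range get a larger font) could be sketched as below. The function name, the tolerance value, and the use of letter height as the size measure are all assumptions for illustration, not the patent's implementation.

```python
# Hypothetical sketch of claim 3's font-size heuristic: compute the average
# letter size, treat letters near the average as the base font, and treat
# letters exceeding the predetermined range as a larger font.
def classify_font_sizes(letter_heights, tolerance=0.15):
    """Label each letter height 'base' or 'large' relative to the average.

    `tolerance` defines the predetermined range as a fraction of the average;
    the value 0.15 is an illustrative assumption.
    """
    avg = sum(letter_heights) / len(letter_heights)
    labels = []
    for h in letter_heights:
        if h <= avg * (1 + tolerance):
            labels.append("base")   # within the predetermined range of the average
        else:
            labels.append("large")  # exceeds the range -> larger font size
    return labels


heights = [10, 11, 10, 18, 10]
print(classify_font_sizes(heights))  # only the 18-unit letter exceeds the range
```

The same average-and-threshold pattern extends naturally to the other paragraph-form features the claims list (indent, line spacing, bullets), each compared against a baseline derived from the handwriting itself.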
9,756 | 9,756 | 14,502,404 | 2,645 | One embodiment provides a method, comprising: detecting, using a processor, a notification at an information handling device; detecting, using a processor, availability of an input associated with a user at the information handling device; and selecting, using a processor, an operating mode of the information handling device, wherein the selected operating mode of the information handling device comprises a mode that modulates a noise. Other aspects are described and claimed. | 1. A method, comprising:
detecting, using a processor, a notification at an information handling device; detecting, using a processor, availability of an input associated with a user at the information handling device; and selecting, using a processor, an operating mode of the information handling device, wherein the selected operating mode of the information handling device comprises a mode that modulates a noise. 2. The method of claim 1, wherein the selecting an operating mode comprises an operating mode selected from the group consisting of: silence, vibrate, and full sound. 3. The method of claim 1, wherein the availability of an input associated with a user comprises an absence of user input for a predetermined amount of time. 4. The method of claim 1, wherein the availability of an input associated with a user comprises an absence of user input for a predetermined number of notifications. 5. The method of claim 1, further comprising:
receiving user input at the information handling device; and reestablishing a previous operating mode of the information handling device. 6. The method of claim 1, further comprising:
determining a preconfigured amount of time has passed since the selecting of the operating mode of the information handling device; and reestablishing a previous operating mode of the information handling device. 7. The method of claim 1, further comprising:
detecting location data associated with the information handling device; and reestablishing a previous operating mode of the information handling device based upon the location data associated with the information handling device. 8. The method of claim 7, further comprising comparing the location data associated with the information handling device to a rule set, wherein the rule set comprises locations in which the previous operating mode should be reestablished. 9. The method of claim 1, wherein the notification is in response to a trigger event and the trigger event comprises an event selected from the group consisting of: an incoming communication, a reminder, and an information handling device alert. 10. The method of claim 1, wherein the noise produced during the operating mode of the information handling device comprises a noise selected from the group consisting of: auditory and haptic. 11. The method of claim 1, wherein the availability of an input associated with a user is selected from the group consisting of: presence of user input, absence of user input, presence of user, absence of user, and presence of a personal area network signal associated with a user. 12. The method of claim 1, wherein modulates is selected from the group consisting of: silencing in the case of an absence being detected and sounding in the case of a presence being detected. 13. An information handling device, comprising:
a processor; a memory device that stores instructions executable by the processor to: detect a notification at an information handling device; detect availability of an input associated with a user at the information handling device; and select an operating mode of the information handling device, wherein the selected operating mode of the information handling device comprises a mode that modulates a noise. 14. The information handling device of claim 13, wherein to select an operating mode comprises an operating mode selected from the group consisting of: silence, vibrate, and full sound. 15. The information handling device of claim 13, wherein the availability of an input associated with a user comprises an absence of user input for a predetermined amount of time. 16. The information handling device of claim 13, wherein the availability of an input associated with a user comprises an absence of user input for a predetermined number of notifications. 17. The information handling device of claim 13, wherein the instructions are further executable to:
receive user input at the information handling device; and reestablish a previous operating mode of the information handling device. 18. The information handling device of claim 13, wherein the instructions are further executable to:
determine a preconfigured amount of time has passed since the selecting of the operating mode of the information handling device; and reestablish a previous operating mode of the information handling device. 19. The information handling device of claim 13, wherein the instructions are further executable to:
detect location data associated with the information handling device; and reestablish a previous operating mode of the information handling device based upon the location data associated with the information handling device. 20. The information handling device of claim 19, wherein the instructions are further executable to compare the location data associated with the information handling device to a rule set, wherein the rule set comprises locations in which the previous operating mode should be reestablished. 21. The information handling device of claim 13, wherein the notification is in response to a trigger event and the trigger event comprises an event selected from the group consisting of: an incoming communication, a reminder, and an information handling device alert. 22. The information handling device of claim 13, wherein the availability of an input associated with a user is selected from the group consisting of: presence of user input, absence of user input, presence of user, absence of user, and presence of a personal area network signal associated with a user. 23. The information handling device of claim 13, wherein modulates is selected from the group consisting of: silencing in the case of an absence being detected and sounding in the case of a presence being detected. 24. A product, comprising:
a storage device having code stored therewith, the code being executable by the processor and comprising: code that detects a notification at an information handling device; code that detects availability of an input associated with a user at the information handling device; and code that selects an operating mode of the information handling device, wherein the selected operating mode of the information handling device comprises a mode that modulates a noise. | One embodiment provides a method, comprising: detecting, using a processor, a notification at an information handling device; detecting, using a processor, availability of an input associated with a user at the information handling device; and selecting, using a processor, an operating mode of the information handling device, wherein the selected operating mode of the information handling device comprises a mode that modulates a noise. Other aspects are described and claimed.1. A method, comprising:
detecting, using a processor, a notification at an information handling device; detecting, using a processor, availability of an input associated with a user at the information handling device; and selecting, using a processor, an operating mode of the information handling device, wherein the selected operating mode of the information handling device comprises a mode that modulates a noise. 2. The method of claim 1, wherein the selecting an operating mode comprises an operating mode selected from the group consisting of: silence, vibrate, and full sound. 3. The method of claim 1, wherein the availability of an input associated with a user comprises an absence of user input for a predetermined amount of time. 4. The method of claim 1, wherein the availability of an input associated with a user comprises an absence of user input for a predetermined number of notifications. 5. The method of claim 1, further comprising:
receiving user input at the information handling device; and reestablishing a previous operating mode of the information handling device. 6. The method of claim 1, further comprising:
determining a preconfigured amount of time has passed since the selecting of the operating mode of the information handling device; and reestablishing a previous operating mode of the information handling device. 7. The method of claim 1, further comprising:
detecting location data associated with the information handling device; and reestablishing a previous operating mode of the information handling device based upon the location data associated with the information handling device. 8. The method of claim 7, further comprising comparing the location data associated with the information handling device to a rule set, wherein the rule set comprises locations in which the previous operating mode should be reestablished. 9. The method of claim 1, wherein the notification is in response to a trigger event and the trigger event comprises an event selected from the group consisting of: an incoming communication, a reminder, and an information handling device alert. 10. The method of claim 1, wherein the noise produced during the operating mode of the information handling device comprises a noise selected from the group consisting of: auditory and haptic. 11. The method of claim 1, wherein the availability of an input associated with a user is selected from the group consisting of: presence of user input, absence of user input, presence of user, absence of user, and presence of a personal area network signal associated with a user. 12. The method of claim 1, wherein modulates is selected from the group consisting of: silencing in the case of an absence being detected and sounding in the case of a presence being detected. 13. An information handling device, comprising:
a processor; a memory device that stores instructions executable by the processor to: detect a notification at an information handling device; detect availability of an input associated with a user at the information handling device; and select an operating mode of the information handling device, wherein the selected operating mode of the information handling device comprises a mode that modulates a noise. 14. The information handling device of claim 13, wherein to select an operating mode comprises an operating mode selected from the group consisting of: silence, vibrate, and full sound. 15. The information handling device of claim 13, wherein the availability of an input associated with a user comprises an absence of user input for a predetermined amount of time. 16. The information handling device of claim 13, wherein the availability of an input associated with a user comprises an absence of user input for a predetermined number of notifications. 17. The information handling device of claim 13, wherein the instructions are further executable to:
receive user input at the information handling device; and reestablish a previous operating mode of the information handling device. 18. The information handling device of claim 13, wherein the instructions are further executable to:
determine a preconfigured amount of time has passed since the selecting of the operating mode of the information handling device; and reestablish a previous operating mode of the information handling device. 19. The information handling device of claim 13, wherein the instructions are further executable to:
detect location data associated with the information handling device; and reestablish a previous operating mode of the information handling device based upon the location data associated with the information handling device. 20. The information handling device of claim 19, wherein the instructions are further executable to compare the location data associated with the information handling device to a rule set, wherein the rule set comprises locations in which the previous operating mode should be reestablished. 21. The information handling device of claim 13, wherein the notification is in response to a trigger event and the trigger event comprises an event selected from the group consisting of: an incoming communication, a reminder, and an information handling device alert. 22. The information handling device of claim 13, wherein the availability of an input associated with a user is selected from the group consisting of: presence of user input, absence of user input, presence of user, absence of user, and presence of a personal area network signal associated with a user. 23. The information handling device of claim 13, wherein modulates is selected from the group consisting of: silencing in the case of an absence being detected and sounding in the case of a presence being detected. 24. A product, comprising:
a storage device having code stored therewith, the code being executable by the processor and comprising: code that detects a notification at an information handling device; code that detects availability of an input associated with a user at the information handling device; and code that selects an operating mode of the information handling device, wherein the selected operating mode of the information handling device comprises a mode that modulates a noise. | 2,600 |
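The presence-driven noise modulation recited in these claims — silence when the user is absent, sound when presence is detected, with a location rule set governing when a previous mode is reestablished — can be sketched in a few lines. All function names and the rule-set shape below are invented for illustration; the claims do not specify an implementation:

```python
# Sketch (hypothetical names) of the claimed mode selection: the device
# silences its notification noise when the user is absent and restores
# sound when presence (e.g. user input or a personal-area-network
# signal) is detected.

SILENCE, VIBRATE, FULL_SOUND = "silence", "vibrate", "full sound"

def select_operating_mode(user_present: bool, previous_mode: str = FULL_SOUND) -> str:
    """Modulate noise: silencing on absence, sounding on presence."""
    if not user_present:
        return SILENCE       # absence detected -> silencing
    return previous_mode     # presence detected -> sounding

def reestablish_mode(location: str, rule_set: dict, current: str, previous: str) -> str:
    """Restore the previous mode only in locations the rule set allows."""
    return previous if rule_set.get(location, False) else current

mode = select_operating_mode(user_present=False)
print(mode)  # silence
print(reestablish_mode("home", {"home": True}, mode, FULL_SOUND))  # full sound
```

The rule set here stands in for the claimed comparison of device location data against "locations in which the previous operating mode should be reestablished."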
9,757 | 9,757 | 15,079,543 | 2,616 | A processor employs a hierarchical register file for a graphics processing unit (GPU). A top level of the hierarchical register file is stored at a local memory of the GPU (e.g., a memory on the same integrated circuit die as the GPU). Lower levels of the hierarchical register file are stored at a different, larger memory, such as a remote memory located on a different die than the GPU. A register file control module monitors the status of in-flight wavefronts at the GPU, and in particular whether each in-flight wavefront is active, predicted to become active, or inactive. The register file control module places execution data for active and predicted-active wavefronts in the top level of the hierarchical register file and places execution data for inactive wavefronts at lower levels of the hierarchical register file. | 1. A processing system comprising:
a processor to couple to a first memory implementing a first register file and to couple to a second memory implementing a second register file, the processor comprising:
a graphics processing unit (GPU) to execute a plurality of wavefronts, the GPU comprising a register file control module having an active wavefront predictor coupled to an inactive wavefront detector, the register file control module configured, in response to identifying a wavefront activity status for at least one of the wavefronts, to perform at least one of:
when the wavefront activity status is identified as an active wavefront ready for execution at the GPU, storing execution data for the at least one wavefront at the first register file;
when the wavefront activity status is identified as predicted to become an active wavefront by the active wavefront predictor, storing execution data for the at least one wavefront at the first register file; and
when the wavefront activity status is identified as an inactive wavefront awaiting execution by the inactive wavefront detector, storing execution data for the at least one wavefront at the second register file. 2. The processing system of claim 1, wherein the register file control module is to:
store the execution data for the at least one wavefront at the second register file by transferring the execution data for the wavefront from the first register file to the second register file. 3. The processing system of claim 2, wherein the register file control module is to:
store the execution data for the at least one wavefront at the first register file by transferring the execution data for the at least one wavefront from the second register file to the first register file in response to predicting the at least one wavefront is to become an active wavefront. 4. The processing system of claim 2, wherein the register file control module is to:
after transferring the execution data for the at least one wavefront from the first register file to the second register file, transfer the execution data for the at least one wavefront from the second register file to the first register file in response to predicting that the at least one wavefront is to become an active wavefront. 5. The processing system of claim 2, wherein the register file control module is to identify that the at least one wavefront is an inactive wavefront in response to:
the GPU initiating execution of an instruction; and identifying that the instruction has been marked to indicate high latency. 6. The processing system of claim 2, further comprising a timer, and wherein the register file control module is to identify that the at least one wavefront is an inactive wavefront in response to:
initiating the timer in response to the GPU initiating execution of an instruction; and identifying that the timer has exceeded a threshold prior to the GPU completing execution of the instruction. 7. The processing system of claim 2, wherein the GPU further comprises:
a buffer to store execution results for the at least one wavefront while its execution data is stored at the second register file. 8. The processing system of claim 7, wherein the GPU is to:
transfer the execution results from the buffer to the second register file. 9. The processing system of claim 2, wherein the processing system comprises a die-stacked memory device including:
a first die comprising the GPU and the first memory; and a second die comprising the second memory. 10. A method comprising:
in response to a processor that has a first memory coupled to a second memory and a graphics processing unit (GPU) coupled to a register file control module, identifying, using the register file control module having an active wavefront predictor coupled to an inactive wavefront detector, a first wavefront of a plurality of wavefronts pending for execution at the GPU is an active wavefront, storing execution data for the first wavefront at a first register file implemented at the first memory; in response to the active wavefront predictor predicting a second wavefront of the plurality of wavefronts is to become an active wavefront, storing execution data for the second wavefront at the first register file at the first memory; and in response to the inactive wavefront detector identifying a third wavefront of the plurality of wavefronts is an inactive wavefront awaiting execution, storing execution data for the third wavefront at a second register file implemented at the second memory separate from the first memory. 11. The method of claim 10, wherein storing the execution data for the third wavefront at the second register file comprises:
transferring the execution data for the third wavefront from the first register file to the second register file. 12. The method of claim 11, wherein storing the execution data for the second wavefront at the first register file comprises:
transferring the execution data for the second wavefront from the second register file to the first register file in response to predicting the second wavefront is to become an active wavefront. 13. The method of claim 11, further comprising:
after transferring the execution data for the third wavefront from the first register file to the second register file, transferring the execution data for the third wavefront from the second register file to the first register file in response to a prediction that the third wavefront is to become an active wavefront. 14. The method of claim 10, further comprising:
identifying the third wavefront is an inactive wavefront in response to:
the GPU initiating execution of an instruction; and
identifying that the instruction has been marked to indicate high latency. 15. The method of claim 10, further comprising:
identifying the third wavefront is an inactive wavefront in response to:
initiating a timer in response to the GPU initiating execution of an instruction; and
identifying that the timer has exceeded a threshold prior to the GPU completing execution of the instruction. 16. The method of claim 10, further comprising:
buffering results of an instruction for the third wavefront at the GPU while the execution data for the third wavefront is stored at the second register file. 17. The method of claim 16, further comprising:
transferring the results of the instruction from the buffer to the second register file. 18. The method of claim 16, further comprising:
in response to predicting that the third wavefront is to become an active wavefront:
transferring the results of the instruction for the third wavefront from the second register file to the first register file; and
transferring the execution results from the buffer to the first register file. 19. The method of claim 10, wherein:
the second memory is located at a stacked memory die of a die-stacked processing system, the die-stacked processing system comprising a set of one or more stacked memory dies and comprising a set of one or more logic dies electrically coupled to the set of one or more stacked memory dies; and the first memory is located in one of the set of one or more logic dies. 20. A method comprising:
identifying, utilizing a processor that has a first memory coupled to a second memory and a graphics processing unit coupled to a register file control module, at the graphics processing unit using an active wavefront predictor or an inactive wavefront detector coupled in the register file control module, a status of a wavefront as an active status indicating the wavefront is ready for execution, a predicted-active status indicating the wavefront is predicted to be ready for execution, or an inactive status indicating the wavefront is stalled; selecting one of a plurality of register files based on the identified status of the wavefront; and storing execution data for the wavefront at the selected one of the plurality of register files at either the first memory or the second memory based on the status of the wavefront. | 2,600 |
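Claims 1 and 20 above describe a placement policy keyed to wavefront activity status: active and predicted-active wavefronts live in the fast first register file (local-die memory), inactive wavefronts are spilled to the larger second register file (e.g. a stacked-memory die). A minimal sketch of that policy, assuming dict-backed register files and hypothetical names (the patent claims hardware, not software):

```python
# Illustrative placement policy, not the patented implementation.
ACTIVE, PREDICTED_ACTIVE, INACTIVE = "active", "predicted-active", "inactive"

class RegisterFileControl:
    def __init__(self):
        self.first_rf = {}   # top level: local GPU memory
        self.second_rf = {}  # lower level: remote/stacked memory

    def place(self, wavefront_id, status):
        """Move a wavefront's execution data to the file its status selects."""
        if status in (ACTIVE, PREDICTED_ACTIVE):
            # Promote: pull any spilled data back into the first register file.
            data = self.second_rf.pop(wavefront_id, None)
            self.first_rf.setdefault(wavefront_id, data)
        else:
            # Spill: inactive wavefronts move to the second register file.
            self.second_rf[wavefront_id] = self.first_rf.pop(wavefront_id, None)

ctrl = RegisterFileControl()
ctrl.place("wf0", ACTIVE)             # ready to execute: first register file
ctrl.place("wf0", INACTIVE)           # stalled: spilled to the second file
ctrl.place("wf0", PREDICTED_ACTIVE)   # promoted back before it resumes
print("wf0" in ctrl.first_rf)  # True
```

Promoting on the predicted-active status is what hides the spill latency: the transfer back to fast memory overlaps with the tail of the stall rather than delaying execution.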
9,758 | 9,758 | 15,098,010 | 2,625 | Example systems and methods for determining application availability are described. In one implementation, a vehicle entertainment system establishes a communication link with a mobile device near the vehicle. The vehicle entertainment system receives, from the mobile device, an identification of currently available applications on the mobile device. The vehicle entertainment system also updates an in-vehicle user interface to display the currently available applications on the mobile device to at least one occupant of the vehicle. | 1. A method comprising:
a vehicle entertainment system establishing a communication link with a mobile device proximate the vehicle; the vehicle entertainment system receiving, from the mobile device, an identification of currently available applications on the mobile device; and the vehicle entertainment system updating an in-vehicle user interface to display the currently available applications on the mobile device to at least one occupant of the vehicle. 2. The method of claim 1, further comprising the in-vehicle user interface displaying vehicle-based applications currently available through the vehicle entertainment system. 3. The method of claim 1, wherein the identification of currently available applications is received from the mobile device using a projection technology. 4. The method of claim 1, wherein the communication link is a wireless communication link. 5. The method of claim 1, wherein the vehicle entertainment system automatically establishes the communication link with the mobile device when the mobile device is a predetermined distance from the vehicle. 6. The method of claim 1, further comprising the vehicle entertainment system requesting approval from the mobile device to display all currently available applications to the at least one occupant of the vehicle. 7. The method of claim 1, further comprising the vehicle entertainment system receiving, from the mobile device, an identification of which currently available applications can be displayed to the at least one occupant of the vehicle. 8. The method of claim 1, further comprising the vehicle entertainment system periodically querying the mobile device to determine any changes to the currently available applications on the mobile device. 9. The method of claim 8, further comprising the vehicle entertainment system updating the in-vehicle user interface to display the updated currently available applications on the mobile device. 10. 
The method of claim 1, further comprising the vehicle entertainment system querying the mobile device to determine whether a particular application is currently available on the mobile device. 11. A method comprising:
a vehicle entertainment system establishing a communication link with a mobile device proximate the vehicle; the vehicle entertainment system querying the mobile device to determine whether a particular application is currently available on the mobile device; the vehicle entertainment system receiving, from the mobile device, an identification of whether the particular application is currently available on the mobile device using a projection technology; and the vehicle entertainment system updating an in-vehicle user interface to display the particular application responsive to receiving an identification that the particular application is currently available on the mobile device. 12. The method of claim 11, further comprising the in-vehicle user interface displaying vehicle-based applications currently available through the vehicle entertainment system. 13. The method of claim 11, further comprising:
the vehicle entertainment system querying the mobile device to determine whether a second application is currently available on the mobile device; the vehicle entertainment system receiving, from the mobile device, an identification of whether the second application is currently available on the mobile device; and the vehicle entertainment system updating an in-vehicle user interface to display the second application responsive to receiving an identification that the second application is currently available on the mobile device. 14. The method of claim 11, further comprising:
the vehicle entertainment system querying the mobile device at periodic intervals to determine whether the particular application is still available on the mobile device; the vehicle entertainment system receiving, from the mobile device, an identification of whether the particular application is still available on the mobile device; and the vehicle entertainment system updating an in-vehicle user interface to remove the particular application responsive to receiving an identification that the particular application is not currently available on the mobile device. 15. The method of claim 11, wherein the communication link is a wireless communication link. 16. The method of claim 11, wherein the vehicle entertainment system automatically establishes the communication link with the mobile device when the mobile device is a predetermined distance from the vehicle. 17. The method of claim 11, further comprising the vehicle entertainment system periodically querying the mobile device to identify any changes to the currently available applications on the mobile device. 18. The method of claim 17, further comprising the vehicle entertainment system updating the in-vehicle user interface to display the updated currently available applications on the mobile device. 19. A vehicle comprising:
a communication module configured to establish a communication link with a mobile device proximate the vehicle, the communication module further configured to receive an identification of currently available applications on the mobile device; a user interface manager configured to update the available application information and other data presented in a user interface; and a display device configured to display the available application information and other data to at least one occupant of the vehicle. 20. The vehicle of claim 19, further comprising a mobile device manager that periodically queries the mobile device to determine any changes to the currently available applications on the mobile device. | 2,600 |
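The query-and-refresh cycle of claims 1, 8, and 9 above — ask the paired device which applications are currently available, then update the in-vehicle display — can be sketched as follows. The device API here is assumed for illustration; the patent does not define a wire protocol:

```python
# Hedged sketch of the claimed app-availability query cycle.
class MobileDevice:
    """Stand-in for the paired phone answering availability queries."""
    def __init__(self, apps):
        self.apps = set(apps)

    def available_apps(self):
        return sorted(self.apps)

    def is_available(self, app):
        return app in self.apps

class EntertainmentSystem:
    """Stand-in for the head unit driving the in-vehicle user interface."""
    def __init__(self):
        self.displayed = []

    def refresh(self, device):
        # Query the device and update the displayed application list.
        self.displayed = device.available_apps()

phone = MobileDevice({"nav", "music"})
head_unit = EntertainmentSystem()
head_unit.refresh(phone)
print(head_unit.displayed)  # ['music', 'nav']
phone.apps.discard("music")  # app becomes unavailable on the device
head_unit.refresh(phone)     # periodic re-query removes it from the display
print(head_unit.displayed)  # ['nav']
```

In the claimed system the periodic re-query (claim 8) is what keeps the display consistent with the device; the sketch models that as a second `refresh` call.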
9,759 | 9,759 | 15,184,312 | 2,689 | A system includes a processor configured to wirelessly receive crash indicia from a vehicle. The processor is also configured to access an occupant profile including medical data relating to an occupant of the vehicle, identify a public safety access point (PSAP), and send the medical data to the identified PSAP, in response to the crash indicia. | 1. A system comprising:
a processor configured to: wirelessly send an instruction to a remote server to transfer vehicle occupant medical data to a public safety assistance point (PSAP), in response to detecting a vehicle accident. 2. The system of claim 1, wherein the processor is configured to:
determine an identity and location within a vehicle of a vehicle occupant; obtain medical data relating to an identified vehicle occupant; and send occupant identification and the obtained related medical data to the remote server. 3. The system of claim 2, wherein the processor is configured to obtain medical data by accessing previously stored medical data relating to the identified vehicle occupant. 4. The system of claim 2, wherein the processor is configured to obtain medical data by requesting medical data from an identified device associated with the vehicle occupant. 5. The system of claim 1, wherein the processor is configured to:
obtain vehicle-related crash data in response to detecting the vehicle accident; and include the vehicle-related crash data with the instruction, wherein the instruction further comprises an instruction to transfer the vehicle-related crash data to the PSAP. 6. The system of claim 5, wherein the vehicle-related crash data includes a camera image of a vehicle interior. 7. The system of claim 5, wherein the vehicle-related crash data includes vehicle sensor data. 8. A system comprising:
a processor configured to: wirelessly receive crash indicia from a vehicle; access an occupant profile including medical data relating to an occupant of the vehicle; identify a public safety access point (PSAP); and send the medical data to the identified PSAP in response to the crash indicia. 9. The system of claim 8, wherein the processor is configured to receive the medical data from the vehicle and store the medical data with respect to the occupant profile. 10. The system of claim 8, wherein the processor is configured to receive the medical data included with the crash indicia. 11. The system of claim 8, wherein the processor is configured to receive vehicle-related crash data included with the crash indicia. 12. The system of claim 11, wherein the processor is configured to send the vehicle-related crash data to the identified PSAP. 13. The system of claim 12, wherein the vehicle-related crash data includes vehicle camera images. 14. The system of claim 12, wherein the vehicle-related crash data includes vehicle sensor readings. 15. The system of claim 8, wherein the processor is configured to identify the PSAP based on a vehicle location included with the crash indicia. 16. The system of claim 8, wherein the processor is configured to identify the PSAP based on a PSAP identification included with the crash indicia. 17. A computer-implemented method comprising:
in response to receiving wireless notification from a vehicle reporting an accident, sending vehicle occupant medical information to a public safety access point (PSAP), the occupant medical information retrieved from an occupant profile including previously saved medical information for an occupant currently present in the vehicle reporting the accident. 18. The method of claim 17, further including sending vehicle system information to the PSAP, the system information having been received as part of the wireless notification. 19. The method of claim 17, further including sending an accident notification to an emergency contact retrieved from the occupant profile in response to receiving the wireless notification. 20. The method of claim 17, wherein the PSAP is identified based on vehicle location information included in the notification. | A system includes a processor configured to wirelessly receive crash indicia from a vehicle. The processor is also configured to access an occupant profile including medical data relating to an occupant of the vehicle, identify a public safety access point (PSAP), and send the medical data to the identified PSAP, in response to the crash indicia.1. A system comprising:
a processor configured to: wirelessly send an instruction to a remote server to transfer vehicle occupant medical data to a public safety assistance point (PSAP), in response to detecting a vehicle accident. 2. The system of claim 1, wherein the processor is configured to:
determine an identity and location within a vehicle of a vehicle occupant; obtain medical data relating to an identified vehicle occupant; and send occupant identification and the obtained related medical data to the remote server. 3. The system of claim 2, wherein the processor is configured to obtain medical data by accessing previously stored medical data relating to the identified vehicle occupant. 4. The system of claim 2, wherein the processor is configured to obtain medical data by requesting medical data from an identified device associated with the vehicle occupant. 5. The system of claim 1, wherein the processor is configured to:
obtain vehicle-related crash data in response to detecting the vehicle accident; and include the vehicle-related crash data with the instruction, wherein the instruction further comprises an instruction to transfer the vehicle-related crash data to the PSAP. 6. The system of claim 5, wherein the vehicle-related crash data includes a camera image of a vehicle interior. 7. The system of claim 5, wherein the vehicle-related crash data includes vehicle sensor data. 8. A system comprising:
a processor configured to: wirelessly receive crash indicia from a vehicle; access an occupant profile including medical data relating to an occupant of the vehicle; identify a public safety access point (PSAP); and send the medical data to the identified PSAP in response to the crash indicia. 9. The system of claim 8, wherein the processor is configured to receive the medical data from the vehicle and store the medical data with respect to the occupant profile. 10. The system of claim 8, wherein the processor is configured to receive the medical data included with the crash indicia. 11. The system of claim 8, wherein the processor is configured to receive vehicle-related crash data included with the crash indicia. 12. The system of claim 11, wherein the processor is configured to send the vehicle-related crash data to the identified PSAP. 13. The system of claim 12, wherein the vehicle-related crash data includes vehicle camera images. 14. The system of claim 12, wherein the vehicle-related crash data includes vehicle sensor readings. 15. The system of claim 8, wherein the processor is configured to identify the PSAP based on a vehicle location included with the crash indicia. 16. The system of claim 8, wherein the processor is configured to identify the PSAP based on a PSAP identification included with the crash indicia. 17. A computer-implemented method comprising:
in response to receiving wireless notification from a vehicle reporting an accident, sending vehicle occupant medical information to a public safety access point (PSAP), the occupant medical information retrieved from an occupant profile including previously saved medical information for an occupant currently present in the vehicle reporting the accident. 18. The method of claim 17, further including sending vehicle system information to the PSAP, the system information having been received as part of the wireless notification. 19. The method of claim 17, further including sending an accident notification to an emergency contact retrieved from the occupant profile in response to receiving the wireless notification. 20. The method of claim 17, wherein the PSAP is identified based on vehicle location information included in the notification. | 2,600 |
9,760 | 9,760 | 14,633,927 | 2,621 | One embodiment provides a method, including: accepting, at an input surface, pen input; determining, using a processor of an electronic device, a modifier key characteristic of the pen input; and executing, using the processor, a modifier key function associated with the pen input. Other embodiments are described and claimed. | 1. A method, comprising:
accepting, at an input surface, pen input; determining, using a processor of an electronic device, a modifier key characteristic of the pen input; and executing, using the processor, a modifier key function associated with the pen input. 2. The method of claim 1, wherein the modifier key characteristic comprises a location of the pen input. 3. The method of claim 2, wherein the location comprises an on-screen keyboard location. 4. The method of claim 2, wherein the location comprises a soft key. 5. The method of claim 4, wherein the executing comprises executing a control key function associated with the soft key. 6. The method of claim 2, wherein the location comprises a non-handwriting input location. 7. The method of claim 1, wherein the modifier key characteristic comprises a physical button press. 8. The method of claim 7, wherein the physical button press comprises a pen button press. 9. The method of claim 7, wherein the physical button press comprises a physical keyboard key press. 10. The method of claim 9, wherein the modifier key characteristic comprises simultaneous detection of the physical keyboard key press and pen movement data. 11. An electronic device, comprising:
an input surface; a processor operatively coupled to the input surface; and a memory that stores instructions executable by the processor to: accept, at the input surface, pen input; determine a modifier key characteristic of the pen input; and execute a modifier key function associated with the pen input. 12. The electronic device of claim 11, wherein the modifier key characteristic comprises a location of the pen input. 13. The electronic device of claim 12, wherein the location comprises an on-screen keyboard location. 14. The electronic device of claim 12, wherein the location comprises a soft key. 15. The electronic device of claim 14, wherein to execute comprises executing a control key function associated with the soft key. 16. The electronic device of claim 12, wherein the location comprises a non-handwriting input location. 17. The electronic device of claim 11, wherein the modifier key characteristic comprises a physical button press. 18. The electronic device of claim 17, wherein the physical button press comprises a pen button press. 19. The electronic device of claim 17, wherein the physical button press comprises a physical keyboard key press. 20. A product, comprising:
a storage device having code stored therewith, the code being executable by a processor of an electronic device and comprising: code that accepts, from an input surface, pen input; code that determines a modifier key characteristic of the pen input; and code that executes a modifier key function associated with the pen input. | One embodiment provides a method, including: accepting, at an input surface, pen input; determining, using a processor of an electronic device, a modifier key characteristic of the pen input; and executing, using the processor, a modifier key function associated with the pen input. Other embodiments are described and claimed.1. A method, comprising:
accepting, at an input surface, pen input; determining, using a processor of an electronic device, a modifier key characteristic of the pen input; and executing, using the processor, a modifier key function associated with the pen input. 2. The method of claim 1, wherein the modifier key characteristic comprises a location of the pen input. 3. The method of claim 2, wherein the location comprises an on-screen keyboard location. 4. The method of claim 2, wherein the location comprises a soft key. 5. The method of claim 4, wherein the executing comprises executing a control key function associated with the soft key. 6. The method of claim 2, wherein the location comprises a non-handwriting input location. 7. The method of claim 1, wherein the modifier key characteristic comprises a physical button press. 8. The method of claim 7, wherein the physical button press comprises a pen button press. 9. The method of claim 7, wherein the physical button press comprises a physical keyboard key press. 10. The method of claim 9, wherein the modifier key characteristic comprises simultaneous detection of the physical keyboard key press and pen movement data. 11. An electronic device, comprising:
an input surface; a processor operatively coupled to the input surface; and a memory that stores instructions executable by the processor to: accept, at the input surface, pen input; determine a modifier key characteristic of the pen input; and execute a modifier key function associated with the pen input. 12. The electronic device of claim 11, wherein the modifier key characteristic comprises a location of the pen input. 13. The electronic device of claim 12, wherein the location comprises an on-screen keyboard location. 14. The electronic device of claim 12, wherein the location comprises a soft key. 15. The electronic device of claim 14, wherein to execute comprises executing a control key function associated with the soft key. 16. The electronic device of claim 12, wherein the location comprises a non-handwriting input location. 17. The electronic device of claim 11, wherein the modifier key characteristic comprises a physical button press. 18. The electronic device of claim 17, wherein the physical button press comprises a pen button press. 19. The electronic device of claim 17, wherein the physical button press comprises a physical keyboard key press. 20. A product, comprising:
a storage device having code stored therewith, the code being executable by a processor of an electronic device and comprising: code that accepts, from an input surface, pen input; code that determines a modifier key characteristic of the pen input; and code that executes a modifier key function associated with the pen input. | 2,600 |
9,761 | 9,761 | 15,057,933 | 2,625 | Haptic feedback remote control systems and methods are provided. A method for providing haptic feedback to a user of a haptic feedback remote control device includes receiving, by a receiving device, an electronic command issued from the haptic feedback remote control device. The receiving device transmits a haptic feedback command to the haptic feedback remote control device. Based on the received haptic feedback command, the haptic feedback remote control activates a haptic feedback device, within the haptic feedback remote control, to provide a haptic feedback effect to a user of the haptic feedback remote control device. | 1. A method for providing haptic feedback to a user of a haptic feedback remote control device, comprising:
providing, by a receiving device, a user interface having a plurality of selectable elements for display on a presentation device; receiving, by the receiving device, an electronic command issued from the haptic feedback remote control device, the electronic command indicating a selection of one of the plurality of selectable elements; performing, by the receiving device, an action corresponding with the indicated selection; transmitting, by the receiving device, a haptic feedback command to the haptic feedback remote control device in response to the performing the action corresponding with the indicated selection; and causing, by the receiving device, a haptic feedback device within the haptic feedback remote control device, to be activated based on the haptic feedback command, to provide a haptic feedback effect to a user of the haptic feedback remote control device, the haptic feedback effect indicating confirmation of the performed action corresponding with the indicated selection. 2. The method of claim 1, further comprising:
determining, by the receiving device, whether the received electronic command indicates a valid command to be performed by the receiving device, wherein, the transmitting the haptic feedback command includes transmitting, by the receiving device, the haptic feedback command to the haptic feedback remote control device based on the receiving device determining whether a valid command is indicated. 3. The method of claim 1, further comprising:
determining, by the receiving device, a type of haptic feedback effect to be provided to the user, based on the received electronic command. 4. The method of claim 3, wherein the type of haptic feedback effect to be provided is selected from among a plurality of types of haptic feedback effects, each of the types of haptic feedback effects being distinguishable from one another based on at least one of: quantity, intensity, duration, speed and rhythm of haptic feedback. 5. The method of claim 3, wherein the determining a type of haptic feedback effect to be provided includes determining a portion of the haptic feedback remote control device to experience the haptic feedback effect. 6. The method of claim 1, wherein the haptic feedback device includes a vibration device. 7. The method of claim 1, further comprising:
causing, by the receiving device, the haptic feedback remote control device to enter a haptic feedback mode, wherein in the haptic feedback mode the haptic feedback remote control device is operable to provide haptic feedback effects. 8. The method of claim 7, wherein the causing the haptic feedback remote control device to enter a haptic feedback mode comprises:
causing the haptic feedback remote control device to automatically enter the haptic feedback mode by the receiving device sending a haptic feedback mode entry command to the haptic feedback remote control device. 9. The method of claim 7, wherein the causing the haptic feedback remote control device to enter a haptic feedback mode comprises:
receiving, by the receiving device, a haptic feedback mode entry command signal from the haptic feedback remote control device. 10. A haptic feedback system, comprising:
a receiving device configured to provide a user interface having a plurality of selectable elements for display on a presentation device; and a haptic feedback remote control device including a haptic feedback device, the haptic feedback remote control device being configured to:
transmit an electronic command to the receiving device, the electronic command indicating a selection of one of the plurality of selectable elements,
receive a haptic feedback command from the receiving device in response to the receiving device performing an action corresponding with the indicated selection, and
activate, based on the received haptic feedback command, the haptic feedback device to provide a haptic feedback effect to a user of the haptic feedback remote control device, the haptic feedback effect indicating confirmation of the performed action corresponding with the indicated selection. 11. The haptic feedback system of claim 10, wherein the receiving device is configured to:
receive the transmitted electronic command indicating the selection of one of the plurality of selectable elements; determine whether the received electronic command indicates a valid command to be performed by the receiving device; and transmit a haptic feedback command to the haptic feedback remote control device based on the receiving device determining whether a valid command is indicated. 12. The haptic feedback system of claim 10, wherein the receiving device is configured to:
receive the transmitted electronic command; and determine a type of haptic feedback effect to be provided to the user, based on the received electronic command. 13. The haptic feedback system of claim 12, wherein the type of haptic feedback effect to be provided is selected from among a plurality of types of haptic feedback effects, each of the types of haptic feedback effects being distinguishable from one another based on at least one of: quantity, intensity, duration, speed and rhythm of haptic feedback. 14. The haptic feedback system of claim 12, wherein the receiving device is further configured to determine a portion of the haptic feedback remote control device to experience the haptic feedback effect. 15. The haptic feedback system of claim 10, wherein the haptic feedback device includes a vibration device. 16. The haptic feedback system of claim 10, wherein the haptic feedback remote control device includes a haptic feedback mode enable/disable input element, the haptic feedback mode enable/disable input element being operable to selectively enter the haptic feedback remote control device into a haptic feedback mode, wherein in the haptic feedback mode the haptic feedback remote control device is operable to provide haptic feedback effects. 17. The haptic feedback system of claim 10, wherein the receiving device is configured to transmit a haptic feedback mode entry command to the haptic feedback remote control device, and the haptic feedback remote control device is configured to enter a haptic feedback mode upon receipt of the haptic feedback mode entry command, wherein in the haptic feedback mode the haptic feedback remote control device is operable to provide haptic feedback effects. 18. A haptic feedback remote control device, comprising:
haptic feedback logic; a haptic feedback device; and a processor coupled to the haptic feedback logic and the haptic feedback device, the haptic feedback remote control device being configured to:
transmit an electronic command to a receiving device, the electronic command indicating a selection of one of a plurality of selectable elements of a user interface, provided by the receiving device, and having a plurality of selectable elements for display on a presentation device,
receive a haptic feedback command from the receiving device in response to the receiving device performing an action corresponding with the indicated selection, and
activate, based on the received haptic feedback command, the haptic feedback device to provide a haptic feedback effect to a user of the haptic feedback remote control device, the haptic feedback effect indicating confirmation of the performed action corresponding with the indicated selection. 19. The haptic feedback remote control device of claim 18, further comprising:
a haptic feedback mode enable/disable input element, the haptic feedback mode enable/disable input element being operable to selectively enter the haptic feedback remote control device into a haptic feedback mode, wherein in the haptic feedback mode the haptic feedback remote control device is operable to provide haptic feedback effects. 20. A haptic feedback system, comprising:
a receiving device configured to provide a user interface having a plurality of selectable elements for display on a presentation device; a haptic feedback remote control device configured to transmit an electronic command to the receiving device, the electronic command indicating a selection of one of the plurality of selectable elements; and haptic feedback means, coupled with the haptic feedback remote control device, for providing a haptic feedback effect to a user of the haptic feedback remote control device, based on a haptic feedback command received from the receiving device in response to the receiving device performing an action corresponding with the indicated selection, the haptic feedback effect indicating confirmation of the performed action corresponding with the indicated selection. | Haptic feedback remote control systems and methods are provided. A method for providing haptic feedback to a user of a haptic feedback remote control device includes receiving, by a receiving device, an electronic command issued from the haptic feedback remote control device. The receiving device transmits a haptic feedback command to the haptic feedback remote control device. Based on the received haptic feedback command, the haptic feedback remote control activates a haptic feedback device, within the haptic feedback remote control, to provide a haptic feedback effect to a user of the haptic feedback remote control device.1. A method for providing haptic feedback to a user of a haptic feedback remote control device, comprising:
providing, by a receiving device, a user interface having a plurality of selectable elements for display on a presentation device; receiving, by the receiving device, an electronic command issued from the haptic feedback remote control device, the electronic command indicating a selection of one of the plurality of selectable elements; performing, by the receiving device, an action corresponding with the indicated selection; transmitting, by the receiving device, a haptic feedback command to the haptic feedback remote control device in response to the performing the action corresponding with the indicated selection; and causing, by the receiving device, a haptic feedback device within the haptic feedback remote control device, to be activated based on the haptic feedback command, to provide a haptic feedback effect to a user of the haptic feedback remote control device, the haptic feedback effect indicating confirmation of the performed action corresponding with the indicated selection. 2. The method of claim 1, further comprising:
determining, by the receiving device, whether the received electronic command indicates a valid command to be performed by the receiving device, wherein, the transmitting the haptic feedback command includes transmitting, by the receiving device, the haptic feedback command to the haptic feedback remote control device based on the receiving device determining whether a valid command is indicated. 3. The method of claim 1, further comprising:
determining, by the receiving device, a type of haptic feedback effect to be provided to the user, based on the received electronic command. 4. The method of claim 3, wherein the type of haptic feedback effect to be provided is selected from among a plurality of types of haptic feedback effects, each of the types of haptic feedback effects being distinguishable from one another based on at least one of: quantity, intensity, duration, speed and rhythm of haptic feedback. 5. The method of claim 3, wherein the determining a type of haptic feedback effect to be provided includes determining a portion of the haptic feedback remote control device to experience the haptic feedback effect. 6. The method of claim 1, wherein the haptic feedback device includes a vibration device. 7. The method of claim 1, further comprising:
causing, by the receiving device, the haptic feedback remote control device to enter a haptic feedback mode, wherein in the haptic feedback mode the haptic feedback remote control device is operable to provide haptic feedback effects. 8. The method of claim 7, wherein the causing the haptic feedback remote control device to enter a haptic feedback mode comprises:
causing the haptic feedback remote control device to automatically enter the haptic feedback mode by the receiving device sending a haptic feedback mode entry command to the haptic feedback remote control device. 9. The method of claim 7, wherein the causing the haptic feedback remote control device to enter a haptic feedback mode comprises:
receiving, by the receiving device, a haptic feedback mode entry command signal from the haptic feedback remote control device. 10. A haptic feedback system, comprising:
a receiving device configured to provide a user interface having a plurality of selectable elements for display on a presentation device; and a haptic feedback remote control device including a haptic feedback device, the haptic feedback remote control device being configured to:
transmit an electronic command to the receiving device, the electronic command indicating a selection of one of the plurality of selectable elements,
receive a haptic feedback command from the receiving device in response to the receiving device performing an action corresponding with the indicated selection, and
activate, based on the received haptic feedback command, the haptic feedback device to provide a haptic feedback effect to a user of the haptic feedback remote control device, the haptic feedback effect indicating confirmation of the performed action corresponding with the indicated selection. 11. The haptic feedback system of claim 10, wherein the receiving device is configured to:
receive the transmitted electronic command indicating the selection of one of the plurality of selectable elements; determine whether the received electronic command indicates a valid command to be performed by the receiving device; and transmit a haptic feedback command to the haptic feedback remote control device based on the receiving device determining whether a valid command is indicated. 12. The haptic feedback system of claim 10, wherein the receiving device is configured to:
receive the transmitted electronic command; and determine a type of haptic feedback effect to be provided to the user, based on the received electronic command. 13. The haptic feedback system of claim 12, wherein the type of haptic feedback effect to be provided is selected from among a plurality of types of haptic feedback effects, each of the types of haptic feedback effects being distinguishable from one another based on at least one of: quantity, intensity, duration, speed and rhythm of haptic feedback. 14. The haptic feedback system of claim 12, wherein the receiving device is further configured to determine a portion of the haptic feedback remote control device to experience the haptic feedback effect. 15. The haptic feedback system of claim 10, wherein the haptic feedback device includes a vibration device. 16. The haptic feedback system of claim 10, wherein the haptic feedback remote control device includes a haptic feedback mode enable/disable input element, the haptic feedback mode enable/disable input element being operable to selectively enter the haptic feedback remote control device into a haptic feedback mode, wherein in the haptic feedback mode the haptic feedback remote control device is operable to provide haptic feedback effects. 17. The haptic feedback system of claim 10, wherein the receiving device is configured to transmit a haptic feedback mode entry command to the haptic feedback remote control device, and the haptic feedback remote control device is configured to enter a haptic feedback mode upon receipt of the haptic feedback mode entry command, wherein in the haptic feedback mode the haptic feedback remote control device is operable to provide haptic feedback effects. 18. A haptic feedback remote control device, comprising:
haptic feedback logic; a haptic feedback device; and a processor coupled to the haptic feedback logic and the haptic feedback device, the haptic feedback remote control device being configured to:
transmit an electronic command to a receiving device, the electronic command indicating a selection of one of a plurality of selectable elements of a user interface, provided by the receiving device, and having a plurality of selectable elements for display on a presentation device,
receive a haptic feedback command from the receiving device in response to the receiving device performing an action corresponding with the indicated selection, and
activate, based on the received haptic feedback command, the haptic feedback device to provide a haptic feedback effect to a user of the haptic feedback remote control device, the haptic feedback effect indicating confirmation of the performed action corresponding with the indicated selection. 19. The haptic feedback remote control device of claim 18, further comprising:
a haptic feedback mode enable/disable input element, the haptic feedback mode enable/disable input element being operable to selectively enter the haptic feedback remote control device into a haptic feedback mode, wherein in the haptic feedback mode the haptic feedback remote control device is operable to provide haptic feedback effects. 20. A haptic feedback system, comprising:
a receiving device configured to provide a user interface having a plurality of selectable elements for display on a presentation device; a haptic feedback remote control device configured to transmit an electronic command to the receiving device, the electronic command indicating a selection of one of the plurality of selectable elements; and haptic feedback means, coupled with the haptic feedback remote control device, for providing a haptic feedback effect to a user of the haptic feedback remote control device, based on a haptic feedback command received from the receiving device in response to the receiving device performing an action corresponding with the indicated selection, the haptic feedback effect indicating confirmation of the performed action corresponding with the indicated selection. | 2,600 |
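The haptic feedback round trip described in the claims above (remote transmits a selection command, the receiving device performs the corresponding action, returns a haptic feedback command, and the remote activates its haptic device to confirm) can be sketched in Python. This is an illustrative sketch only: the `Receiver` and `Remote` classes, the set of UI elements, and the feedback-dictionary shape are all invented here, not taken from the patent.

```python
class Receiver:
    """Receiving device (e.g. a set-top box) providing a user interface
    with selectable elements. Names and structure are hypothetical."""

    def __init__(self):
        self.ui_elements = {"play", "pause", "menu"}

    def handle_command(self, selection):
        # Perform the action for a valid selection, then confirm it by
        # sending a haptic feedback command back to the remote.
        if selection not in self.ui_elements:
            return None
        return {"effect": "vibrate", "duration_ms": 50}


class Remote:
    """Haptic feedback remote control with an enable/disable mode flag."""

    def __init__(self, receiver):
        self.receiver = receiver
        self.haptics_enabled = True  # haptic feedback mode (claim 16/19)
        self.log = []                # record of activated haptic effects

    def select(self, element):
        # Transmit the electronic command and wait for the feedback command.
        feedback = self.receiver.handle_command(element)
        if feedback and self.haptics_enabled:
            # Activate the haptic device to confirm the performed action.
            self.log.append((element, feedback["effect"]))
        return feedback
```

For example, `Remote(Receiver()).select("play")` returns the vibrate command and records the effect, while selecting an element the receiver does not recognise returns `None` and produces no haptic confirmation.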
9,762 | 9,762 | 14,904,065 | 2,619 | A method and a system for image layer composition are provided. The method may include detecting whether a plurality of computing devices of a computing system have available resources for performing image layer composition. The method also includes receiving a plurality of image layers and controlling at least one computing device detected having available resources for performing image layer composition to compose the plurality of image layers. Composition efficiency may be improved. | 1. A method for image layer composition, comprising:
detecting whether a plurality of computing devices of a computing system have available resources for performing image layer composition; receiving a plurality of image layers; and controlling at least one computing device detected having the available resources for performing image layer composition to compose the plurality of image layers. 2. The method according to claim 1, wherein the plurality of computing devices of the computing system comprise a graphic processing unit, a central processing unit and a display controller. 3. The method according to claim 1, wherein the plurality of computing devices of the computing system comprise at least two different types of devices. 4. The method according to claim 1, further comprising: determining computation capability required to compose the plurality of image layers, where the at least one computing device is controlled to compose the plurality of image layers based on the determination of the computation capability required to compose the plurality of image layers. 5. The method according to claim 1, wherein if one or more computing devices is detected having the available resources, the one or more computing devices detected having the available resources is controlled to compose the plurality of image layers based on composition speeds and computation loads. 6. A system for image layer composition, comprising a plurality of computing devices and a processing device, wherein the processing device is configured to:
detect whether the plurality of computing devices have available resources for performing image layer composition; and control at least one computing device of the plurality of computing devices detected having the available resources for performing image layer composition to compose a plurality of image layers. 7. The system according to claim 6, wherein the plurality of computing devices of the system comprise a graphic processing unit, a central processing unit and a display controller. 8. The system according to claim 6, wherein the plurality of computing devices of the system comprise at least two different types of devices. 9. The system according to claim 6, wherein the processing device is further configured to: determine computation capability required to compose the plurality of image layers, and control the at least one computing device detected having the available resources for performing image layer composition to compose the plurality of image layers based on the determination of computation capability required to compose the plurality of image layers. 10. The system according to claim 6, wherein if one or more computing devices is detected to have the available resources, the processing device is configured to control the one or more computing devices detected having the available resources to compose the plurality of image layers based on composition speeds and computation loads. 11. The system of claim 6 wherein the processing device is further configured to detect whether the plurality of image layers are required to be composed prior to detecting whether the plurality of computing devices have available resources available. 12. The system of claim 11 wherein the processing device is further configured to detect whether the plurality of image layers are required to be composed by detecting information corresponding to the plurality of image layers in frame buffers. 13. 
The system of claim 12 wherein the processing device is further configured to detect the information corresponding to the plurality of image layers in the frame buffers within a predetermined time interval. 14. The system of claim 13 wherein the predetermined time interval is based on a refresh rate of a display screen which depicts a composition result. 15. The system of claim 12 wherein the processing device is further configured to compose the plurality of image layers based on the information and on the at least one computing device of the plurality of computing devices that is detected to have the available resources. 16. The method of claim 1 further comprising detecting whether the plurality of image layers are required to be composed prior to detecting whether the plurality of computing devices of the computing system have the available resources. 17. The method of claim 16 wherein detecting whether the plurality of image layers are required to be composed further comprises detecting information corresponding to the plurality of image layers in frame buffers. 18. The method of claim 17 wherein detecting the information corresponding to the plurality of image layers in the frame buffers further comprises detecting the information corresponding to the plurality of image layers in the frame buffers within a predetermined time interval. 19. The system of claim 17 further comprising composing the plurality of image layers based on the information and on the at least one computing device of the plurality of computing devices that is detected to have the available resources. 20. A non-transitory computer readable medium that includes a computer program executable by a processor for controlling composition of a plurality of image layers, the computer readable medium comprising:
instructions to detect whether a plurality of computing devices of a computing system have available resources for performing image layer composition; and instructions to control at least one computing device of the plurality of computing devices detected having the available resources for performing image layer composition to compose the plurality of image layers.
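Claims 1 and 5 above describe detecting which computing devices (GPU, CPU, display controller) have available resources and dispatching the composition based on composition speeds and computation loads. A minimal Python sketch of that dispatch follows; the `ComputeDevice` class, the scoring formula, and the dictionary-merge stand-in for actual layer blending are assumptions for illustration, not the patented implementation.

```python
from dataclasses import dataclass


@dataclass
class ComputeDevice:
    """A composition-capable device (GPU, CPU, display controller)."""
    name: str
    speed: float       # relative composition speed (higher is faster)
    load: float        # current computation load in [0, 1]
    capacity: float = 1.0

    def has_available_resources(self, required: float) -> bool:
        return self.capacity - self.load >= required


def select_device(devices, required):
    """Pick the available device minimising estimated composition time,
    trading off composition speed against current computation load."""
    candidates = [d for d in devices if d.has_available_resources(required)]
    if not candidates:
        return None
    return min(candidates, key=lambda d: (1 + d.load) / d.speed)


def compose_layers(devices, layers, required=0.2):
    device = select_device(devices, required)
    if device is None:
        raise RuntimeError("no computing device has available resources")
    # Naive back-to-front merge standing in for real layer blending.
    result = {}
    for layer in layers:
        result.update(layer)
    return device.name, result
```

With a lightly loaded GPU, a free CPU, and a nearly saturated display controller, the sketch routes composition to the GPU because its speed outweighs its moderate load.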
9,763 | 9,763 | 13,658,794 | 2,616 | To provide information about geographic locations, an interactive 3D display of geolocated imagery is provided via a user interface of a computing device. A view of the geolocated imagery is generated from a perspective of a notional camera having a particular camera pose, where the camera pose is associated with at least position and orientation. A selection of a location within the interactive display is received via the user interface, and a symbolic location corresponding to the selected location is automatically identified, where at least textual information is available for the symbolic location. Automatically and without further input via the user interface, (i) the notional camera is moved toward the selected location, and (ii) overlaid textual description of the symbolic location that includes a link to additional information related to the symbolic location is provided. | 1. A method in a computing device for providing information about geographic locations, the method comprising:
providing, using one or more processors, an interactive three-dimensional (3D) display of geolocated imagery for a geographic area via a user interface of the computing device, including generating a view of the geolocated imagery from a perspective of a notional camera having a particular camera pose, wherein the camera pose is associated with at least position and orientation; receiving, via the user interface, a selection of a location within the interactive display; automatically identifying a symbolic location corresponding to the selected location, wherein at least textual information is available for the symbolic location; automatically and without further input via the user interface, (i) moving the notional camera so as to directly face the selected location, and (ii) providing overlaid textual description of the symbolic location that includes a link to additional information related to the symbolic location. 2. The method of claim 1, wherein providing the overlaid textual description includes displaying a window with a search term input box prefilled with a search term associated with the symbolic location. 3. The method of claim 1, wherein providing the overlaid textual description includes
displaying an expandable informational window with textual description, and in response to a user activating the expandable informational window, displaying an expanded informational window with a search term input box prefilled with a search term associated with the symbolic location. 4. The method of claim 1, wherein the symbolic location corresponds to a landmark structure or a landmark natural formation. 5. The method of claim 1, wherein providing the overlaid textual description of the symbolic location includes:
identifying, at the computing device, a selected image from among a plurality of images that make up the geolocated imagery, wherein the identified image includes a tag identifying the symbolic location; sending, via a communication network, the tag to a group of one or more servers, and receiving, via a communication network, the textual description from the group of servers. 6. The method of claim 5, wherein the tag further identifies a pose of a camera with which the image was captured, wherein the pose includes position and orientation. 7. The method of claim 1, wherein identifying the symbolic location includes sending a portion of the geolocated imagery associated with the selected location to a group of one or more servers. 8. A method in a network device for efficiently providing information about locations displayed via a map application, the method comprising:
receiving, from a client device via a communication network, an indication of a camera position corresponding to a photographic image being displayed on the client device via a map application, wherein the camera position is moved so as to directly face the photographic image; automatically determining a symbolic location corresponding to the photographic image based on the received indication of the camera position; and providing, to the client computer, a textual description of the symbolic location and search links related to the symbolic location for use at the client device to display the textual description and search links in an overlay layer of the map application. 9. The method of claim 8, further comprising receiving an indication of a type of map with which the photographic image is being displayed. 10. The method of claim 8, wherein receiving the indication of the camera position includes receiving one or more of:
(i) latitude and longitude of the camera, (ii) orientation of the camera, and (iii) camera frustum. 11. The method of claim 8, further comprising receiving a tag identifying the symbolic location depicted in the image from the client device. 12. The method of claim 8, further comprising:
performing, with the server, an Internet search of the symbolic location; receiving, with the server, one or more results from the Internet search; selecting, with the server, a representative text description of the symbolic location; preparing, with the server, one or more links to at least one popular search term associated with the symbolic location; and storing the representative text description and the one or more links at a computer memory accessible by the server. 13. The method of claim 12, wherein the providing the textual description comprises providing the representative text description and the one or more links stored at the computer memory. 14. A computing device comprising:
one or more processors; a computer-readable memory coupled to the one or more processors; a network interface configured to transmit and receive data via a communication network; a user interface configured to display images and receive user input; a plurality of instructions stored in the computer-readable memory that, when executed by the one or more processors, causes the computing device to:
provide an interactive display of geolocated imagery for a geographic area via the user interface,
receive, via the user interface, a selection of a location within the interactive display,
automatically identify a symbolic location corresponding to the geolocated imagery at the selected location, and
automatically and without further input via the user interface, update the interactive display to organize the geolocated imagery so as to directly face the subject and provide overlaid textual description of the identified subject including an interactive link to additional information. 15. The computing device of claim 14, wherein the plurality of instructions provide a 3D display of geolocated imagery and implement a set of controls for navigating the 3D display. 16. The computing device of claim 14, wherein the plurality of instructions, when executed by the one or more processors, further cause the computing device to display a window with a search term input box prefilled with a search term associated with the symbolic location. 17. The computing device of claim 14, wherein the plurality of instructions, when executed by the one or more processors, cause the computing device to:
display a compact informational window with textual description, and in response to a user activating the expandable informational window, display an expanded informational window with a search term input box prefilled with a search term associated with the symbolic location. 18. The computing device of claim 14, wherein the symbolic location corresponds to a landmark structure or a landmark natural formation. 19. The computing device of claim 14, to provide the overlaid textual description of the symbolic location, the plurality of instructions are configured to:
identify, at the computing device, a selected image from among a plurality of images that make up the geolocated imagery, wherein the identified image includes a tag identifying the symbolic location; send, via the communication network, the tag to a group of one or more servers, and receive, via a communication network, the textual description from the group of servers. 20. The computing device of claim 18, wherein the tag further identifies a pose of a camera with which the image was captured, wherein the pose includes position and orientation.
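Claim 8 above describes a server receiving a camera position (latitude, longitude, orientation; see claim 10) and determining the symbolic location the camera directly faces. A toy Python sketch of that resolution step follows; the landmark table, the flat-earth bearing approximation, and the field-of-view threshold are all invented for illustration, and a real service would use a proper spatial index and geodesic math.

```python
import math
from dataclasses import dataclass


@dataclass
class CameraPose:
    lat: float
    lon: float
    heading_deg: float  # direction the camera faces, 0 = north


# Hypothetical landmark table mapping symbolic locations to (lat, lon).
LANDMARKS = {
    "Eiffel Tower": (48.8584, 2.2945),
    "Arc de Triomphe": (48.8738, 2.2950),
}


def bearing_deg(from_pt, to_pt):
    """Approximate bearing over a small area (flat-earth approximation)."""
    dlat = to_pt[0] - from_pt[0]
    dlon = (to_pt[1] - from_pt[1]) * math.cos(math.radians(from_pt[0]))
    return math.degrees(math.atan2(dlon, dlat)) % 360


def resolve_symbolic_location(pose, fov_deg=60.0):
    """Return the landmark the camera most directly faces, if any lies
    within half the field of view of the camera heading."""
    best, best_off = None, fov_deg / 2
    for name, pt in LANDMARKS.items():
        b = bearing_deg((pose.lat, pose.lon), pt)
        # Smallest angular offset between bearing and heading, in [0, 180].
        off = abs((b - pose.heading_deg + 180) % 360 - 180)
        if off <= best_off:
            best, best_off = name, off
    return best
```

A camera just south of the Eiffel Tower facing north resolves to "Eiffel Tower"; facing the opposite direction, no landmark falls inside the field of view and the sketch returns `None`.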
9,764 | 9,764 | 15,156,658 | 2,616 | Methods, systems and non-transitory computer readable media are described. A system includes a shader pipe array, a redundant shader pipe array, a sequencer and a redundant shader switch. The shader pipe array includes multiple shader pipes, each of which performs rendering calculations on data provided thereto. The redundant shader pipe array also performs rendering calculations on data provided thereto. The sequencer identifies at least one defective shader pipe in the shader pipe array, and, in response, generates a signal. The redundant shader switch receives the generated signal, and, in response, transfers the data destined for each shader pipe identified as being defective independently to the redundant shader pipe array. | 1. A system comprising:
a shader pipe array comprising a plurality of shader pipes, each of the plurality of shader pipes being configured to perform rendering calculations on data provided thereto; a redundant shader pipe array configured to perform rendering calculations on data provided thereto; a sequencer configured to identify at least one defective shader pipe of the plurality of shader pipes in the shader pipe array, and, in response to identifying the at least one defective shader pipe, generate a signal; and a redundant shader switch configured to:
receive the generated signal, and
in response to receiving the generated signal, transfer the data destined for each shader pipe identified as being defective independently to the redundant shader pipe array. 2. The system of claim 1, wherein the redundant shader switch is further configured to transfer the data destined for each shader pipe identified as being defective without transferring the data destined to all other shader pipes in the shader pipe array that were not identified as being defective. 3. The system of claim 1, wherein the redundant shader switch is further configured to directly switch the data destined for each shader pipe identified as being defective via at least one horizontal path to the redundant shader pipe array. 4. The system of claim 1, wherein:
the shader pipe array further comprises a plurality of vertical shader pipe columns, each of the plurality of vertical shader pipe columns comprising at least one of the plurality of shader pipes, and the redundant shader switch further comprises a plurality of delay buffers and a plurality of output buffers, each of the plurality of output buffers being coupled to a respective one of the plurality of delay buffers and aligned with a respective vertical shader pipe column. 5. The system of claim 4, wherein:
the plurality of delay buffers are configured to contain data output of the corresponding vertical shader pipe columns for a sufficient period of time to allow a result for the redundant shader pipe array to be re-aligned, and an output buffer of the plurality of output buffers coupled to the at least one defective shader pipe is configured to receive the result from the redundant shader pipe array. 6. The system of claim 1, wherein the redundant shader pipe array is configured to receive data to be processed from sources other than via transfer from the redundant shader switch on a condition that at least one of the sequencer does not identify any shader pipes in the shader pipe array as being defective or the sequencer identifies the at least one of the shader pipes in the shader pipe array as being intermittently defective. 7. A method, implemented in a sequencer, the method comprising:
identifying at least one defective shader pipe of a plurality of shader pipes in a shader pipe array; and generating a signal directing a redundant shader switch to transfer data destined for each shader pipe identified as being defective independently to a redundant shader pipe array. 8. The method of claim 7, wherein the generating further comprises generating the signal directing the redundant shader switch to transfer the data destined for each shader pipe identified as being defective without transferring the data destined to all other shader pipes in the shader pipe array that were not identified as being defective. 9. A non-transitory computer readable medium carrying one or more sequences of one or more instructions for execution by one or more processors to perform operations, comprising:
identifying at least one defective shader pipe of a plurality of shader pipes in a shader pipe array; and generating a signal directing a redundant shader switch to transfer data destined for each shader pipe identified as being defective independently to a redundant shader pipe array. 10. The non-transitory computer readable medium of claim 9, wherein the one or more instructions for generating the signal further comprise one or more instructions to generate the signal directing the redundant shader switch to transfer the data destined for each shader pipe identified as being defective without transferring the data destined to all other shader pipes in the shader pipe array that were not identified as being defective. | Methods, systems and non-transitory computer readable media are described. A system includes a shader pipe array, a redundant shader pipe array, a sequencer and a redundant shader switch. The shader pipe array includes multiple shader pipes, each of which performs rendering calculations on data provided thereto. The redundant shader pipe array also performs rendering calculations on data provided thereto. The sequencer identifies at least one defective shader pipe in the shader pipe array, and, in response, generates a signal. The redundant shader switch receives the generated signal, and, in response, transfers the data destined for each shader pipe identified as being defective independently to the redundant shader pipe array.1. A system comprising:
a shader pipe array comprising a plurality of shader pipes, each of the plurality of shader pipes being configured to perform rendering calculations on data provided thereto; a redundant shader pipe array configured to perform rendering calculations on data provided thereto; a sequencer configured to identify at least one defective shader pipe of the plurality of shader pipes in the shader pipe array, and, in response to identifying the at least one defective shader pipe, generate a signal; and a redundant shader switch configured to:
receive the generated signal, and
in response to receiving the generated signal, transfer the data destined for each shader pipe identified as being defective independently to the redundant shader pipe array. 2. The system of claim 1, wherein the redundant shader switch is further configured to transfer the data destined for each shader pipe identified as being defective without transferring the data destined to all other shader pipes in the shader pipe array that were not identified as being defective. 3. The system of claim 1, wherein the redundant shader switch is further configured to directly switch the data destined for each shader pipe identified as being defective via at least one horizontal path to the redundant shader pipe array. 4. The system of claim 1, wherein:
the shader pipe array further comprises a plurality of vertical shader pipe columns, each of the plurality of vertical shader pipe columns comprising at least one of the plurality of shader pipes, and the redundant shader switch further comprises a plurality of delay buffers and a plurality of output buffers, each of the plurality of output buffers being coupled to a respective one of the plurality of delay buffers and aligned with a respective vertical shader pipe column. 5. The system of claim 4, wherein:
the plurality of delay buffers are configured to contain data output of the corresponding vertical shader pipe columns for a sufficient period of time to allow a result for the redundant shader pipe array to be re-aligned, and an output buffer of the plurality of output buffers coupled to the at least one defective shader pipe is configured to receive the result from the redundant shader pipe array. 6. The system of claim 1, wherein the redundant shader pipe array is configured to receive data to be processed from sources other than via transfer from the redundant shader switch on a condition that at least one of the sequencer does not identify any shader pipes in the shader pipe array as being defective or the sequencer identifies the at least one of the shader pipes in the shader pipe array as being intermittently defective. 7. A method, implemented in a sequencer, the method comprising:
identifying at least one defective shader pipe of a plurality of shader pipes in a shader pipe array; and generating a signal directing a redundant shader switch to transfer data destined for each shader pipe identified as being defective independently to a redundant shader pipe array. 8. The method of claim 7, wherein the generating further comprises generating the signal directing the redundant shader switch to transfer the data destined for each shader pipe identified as being defective without transferring the data destined to all other shader pipes in the shader pipe array that were not identified as being defective. 9. A non-transitory computer readable medium carrying one or more sequences of one or more instructions for execution by one or more processors to perform operations, comprising:
identifying at least one defective shader pipe of a plurality of shader pipes in a shader pipe array; and generating a signal directing a redundant shader switch to transfer data destined for each shader pipe identified as being defective independently to a redundant shader pipe array. 10. The non-transitory computer readable medium of claim 9, wherein the one or more instructions for generating the signal further comprise one or more instructions to generate the signal directing the redundant shader switch to transfer the data destined for each shader pipe identified as being defective without transferring the data destined to all other shader pipes in the shader pipe array that were not identified as being defective. | 2,600 |
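The routing behavior claimed in the record above (divert only the data destined for pipes flagged defective to the redundant shader pipe array, without disturbing data for healthy pipes) can be sketched in Python. This is a minimal illustrative sketch, not the patented implementation; the function and variable names (`route_work`, `redundant_pipe`) are invented for the example.

```python
# Illustrative sketch of the claimed redundant-shader-switch routing:
# only work destined for pipes identified as defective is transferred,
# independently, to the redundant pipe; healthy pipes run unchanged.

def route_work(work_per_pipe, defective, redundant_pipe, pipes):
    """work_per_pipe: dict pipe_id -> data; defective: set of pipe_ids."""
    results = {}
    for pipe_id, data in work_per_pipe.items():
        if pipe_id in defective:
            # Divert this pipe's data to the redundant shader pipe array.
            results[pipe_id] = redundant_pipe(data)
        else:
            # Non-defective pipes receive their data as before.
            results[pipe_id] = pipes[pipe_id](data)
    return results

# Toy usage: pipe 1 is flagged defective, so only its data is diverted.
pipes = {0: lambda d: ("pipe0", d), 1: lambda d: ("pipe1", d)}
redundant = lambda d: ("redundant", d)
out = route_work({0: "a", 1: "b"}, {1}, redundant, pipes)
```

The key property, matching claim 2, is that the diversion is per-pipe: data for pipe 0 never passes through the redundant path.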
9,765 | 9,765 | 15,154,894 | 2,647 | Techniques described herein may be used to conserve battery power of an Internet of Things (IoT) tracker by increasing the overall amount of time that the IoT tracker is in a battery conservation mode (a sleep mode, a Power Save Mode (PSM), etc.). An IoT tracker may implement a battery conservation policy that may include instructions that cause the IoT tracker to monitor certain conditions, determine when the conditions satisfy a particular trigger, and implement a battery conservation mode in response to those conditions. Examples of such conditions may include (1) the IoT tracker being close to a user device designated to track the location of the IoT tracker, (2) identifying that a current time and day are associated with a pre-selected schedule for disabling tracking services, (3) the IoT tracker being located within a particular geographic area, and more. | 1. A tracker device, comprising circuitry to:
determine a geographical location of the tracker device and communicate the geographic location of the tracker device to a wireless telecommunications network to which the tracker device is connected; receive a policy relating to conditions for entering into a battery conservation mode in response to a specified trigger, the specified trigger including the tracker device being within a particular distance of a user device configured to monitor the geographic location of the tracker device, and the battery conservation mode including a mode of operation where the tracker device conserves battery power, of the tracker device, by refraining from determining the geographic location of the tracker device and from communicating the geographic location to the wireless telecommunications network; monitor conditions corresponding to the specified trigger; determine that the monitored conditions have satisfied the specified trigger; and enter, in response to a determination that the monitored conditions have satisfied the specified trigger, into the battery conservation mode. 2. The tracker device of claim 1, wherein, in response to entering into the battery conservation mode, the circuitry is to:
notify the user device that the tracker device has entered into the power conservation mode. 3. The tracker device of claim 1, wherein the monitored conditions include a date and time during which the tracker device is geographically stationary. 4. The tracker device of claim 3, wherein the monitored conditions further include a particular geographic location associated with the date and time. 5. The tracker device of claim 1, wherein the monitored conditions include whether a current geographic location of the tracker device is inside, or outside, of a particular geographic area. 6. The tracker device of claim 1, wherein the circuitry is to:
determine a geographical location of the tracker device and communicate the geographic location of the tracker device in response to a command from a user of the tracker device. 7. The tracker device of claim 1, wherein the monitored conditions include an indication, from an accelerometer of the tracker device, that the tracker device is moving. 8. The tracker device of claim 1, wherein the circuitry is to:
analyze a schedule corresponding to the tracker device; identify dates and times when the tracker device should be geographically stationary; and generate the policy relating to conditions for entering into the battery conservation mode in response to the dates and times when the tracker device should be geographically stationary. 9. A method, comprising:
determining, by a tracker device, a geographical location of the tracker device and communicating the geographic location of the tracker device to a wireless telecommunications network to which the tracker device is connected; receiving, by the tracker device, a policy relating to conditions for entering into a battery conservation mode in response to a specified trigger, the specified trigger including the tracker device being within a particular distance of a user device configured to monitor the geographic location of the tracker device, and the battery conservation mode including a mode of operation where the tracker device conserves battery power, of the tracker device, by refraining from determining the geographic location of the tracker device and from communicating the geographic location to the wireless telecommunications network; monitoring, by the tracker device, conditions corresponding to the specified trigger; determining, by the tracker device, that the monitored conditions have satisfied the specified trigger; and entering, by the tracker device and in response to determining that the monitored conditions have satisfied the specified trigger, into the battery conservation mode. 10. The method of claim 9, further comprising:
in response to entering into the battery conservation mode,
notifying the user device that the tracker device has entered into the power conservation mode. 11. The method of claim 9, wherein the monitored conditions include a date and time during which the tracker device is geographically stationary. 12. The method of claim 11, wherein the monitored conditions further include a particular geographic location associated with the date and time. 13. The method of claim 9, wherein the monitored conditions include whether a current geographic location of the tracker device is inside, or outside, of a particular geographic area. 14. The method of claim 9, wherein the circuitry is to:
determine a geographical location of the tracker device and communicate the geographic location of the tracker device in response to a command from a user of the tracker device. 15. The method of claim 9, wherein the monitored conditions include an indication, from an accelerometer of the tracker device, that the tracker device is moving. 16. The method of claim 9, further comprising:
analyze a schedule corresponding to the tracker device; identify dates and times when the tracker device should be geographically stationary; and generate the policy relating to conditions for entering into the battery conservation mode in response to the dates and times when the tracker device should be geographically stationary. 17. A non-transitory, computer readable medium storing a plurality of processor-executable instructions, wherein executing the processor-executable instructions cause one or more processors to:
determine a geographical location of the tracker device and communicate the geographic location of the tracker device to a wireless telecommunications network to which the tracker device is connected; receive a policy relating to conditions for entering into a battery conservation mode in response to a specified trigger, the specified trigger including the tracker device being within a particular distance of a user device configured to monitor the geographic location of the tracker device, and the battery conservation mode including a mode of operation where the tracker device conserves battery power, of the tracker device, by refraining from determining the geographic location of the tracker device and from communicating the geographic location to the wireless telecommunications network; monitor conditions corresponding to the specified trigger; determine that the monitored conditions have satisfied the specified trigger; and enter, in response to a determination that the monitored conditions have satisfied the specified trigger, into the battery conservation mode. 18. The non-transitory, computer readable medium of claim 17, wherein, in response to entering into the battery conservation mode, the processor-executable instructions cause the one or more processors to:
notify the user device that the tracker device has entered into the power conservation mode. 19. The non-transitory, computer readable medium of claim 17, wherein the monitored conditions include a date and time during which the tracker device is geographically stationary. 20. The non-transitory, computer readable medium of claim 19, wherein the monitored conditions further include a particular geographic location associated with the date and time. | Techniques described herein may be used to conserve battery power of an Internet of Things (IoT) tracker by increasing the overall amount of time that the IoT tracker is in a battery conservation mode (a sleep mode, a Power Save Mode (PSM), etc.). An IoT tracker may implement a battery conservation policy that may include instructions that cause the IoT tracker to monitor certain conditions, determine when the conditions satisfy a particular trigger, and implement a battery conservation mode in response to those conditions. Examples of such conditions may include (1) the IoT tracker being close to a user device designated to track the location of the IoT tracker, (2) identifying that a current time and day are associated with a pre-selected schedule for disabling tracking services, (3) the IoT tracker being located within a particular geographic area, and more.1. A tracker device, comprising circuitry to:
determine a geographical location of the tracker device and communicate the geographic location of the tracker device to a wireless telecommunications network to which the tracker device is connected; receive a policy relating to conditions for entering into a battery conservation mode in response to a specified trigger, the specified trigger including the tracker device being within a particular distance of a user device configured to monitor the geographic location of the tracker device, and the battery conservation mode including a mode of operation where the tracker device conserves battery power, of the tracker device, by refraining from determining the geographic location of the tracker device and from communicating the geographic location to the wireless telecommunications network; monitor conditions corresponding to the specified trigger; determine that the monitored conditions have satisfied the specified trigger; and enter, in response to a determination that the monitored conditions have satisfied the specified trigger, into the battery conservation mode. 2. The tracker device of claim 1, wherein, in response to entering into the battery conservation mode, the circuitry is to:
notify the user device that the tracker device has entered into the power conservation mode. 3. The tracker device of claim 1, wherein the monitored conditions include a date and time during which the tracker device is geographically stationary. 4. The tracker device of claim 3, wherein the monitored conditions further include a particular geographic location associated with the date and time. 5. The tracker device of claim 1, wherein the monitored conditions include whether a current geographic location of the tracker device is inside, or outside, of a particular geographic area. 6. The tracker device of claim 1, wherein the circuitry is to:
determine a geographical location of the tracker device and communicate the geographic location of the tracker device in response to a command from a user of the tracker device. 7. The tracker device of claim 1, wherein the monitored conditions include an indication, from an accelerometer of the tracker device, that the tracker device is moving. 8. The tracker device of claim 1, wherein the circuitry is to:
analyze a schedule corresponding to the tracker device; identify dates and times when the tracker device should be geographically stationary; and generate the policy relating to conditions for entering into the battery conservation mode in response to the dates and times when the tracker device should be geographically stationary. 9. A method, comprising:
determining, by a tracker device, a geographical location of the tracker device and communicating the geographic location of the tracker device to a wireless telecommunications network to which the tracker device is connected; receiving, by the tracker device, a policy relating to conditions for entering into a battery conservation mode in response to a specified trigger, the specified trigger including the tracker device being within a particular distance of a user device configured to monitor the geographic location of the tracker device, and the battery conservation mode including a mode of operation where the tracker device conserves battery power, of the tracker device, by refraining from determining the geographic location of the tracker device and from communicating the geographic location to the wireless telecommunications network; monitoring, by the tracker device, conditions corresponding to the specified trigger; determining, by the tracker device, that the monitored conditions have satisfied the specified trigger; and entering, by the tracker device and in response to determining that the monitored conditions have satisfied the specified trigger, into the battery conservation mode. 10. The method of claim 9, further comprising:
in response to entering into the battery conservation mode,
notifying the user device that the tracker device has entered into the power conservation mode. 11. The method of claim 9, wherein the monitored conditions include a date and time during which the tracker device is geographically stationary. 12. The method of claim 11, wherein the monitored conditions further include a particular geographic location associated with the date and time. 13. The method of claim 9, wherein the monitored conditions include whether a current geographic location of the tracker device is inside, or outside, of a particular geographic area. 14. The method of claim 9, wherein the circuitry is to:
determine a geographical location of the tracker device and communicate the geographic location of the tracker device in response to a command from a user of the tracker device. 15. The method of claim 9, wherein the monitored conditions include an indication, from an accelerometer of the tracker device, that the tracker device is moving. 16. The method of claim 9, further comprising:
analyze a schedule corresponding to the tracker device; identify dates and times when the tracker device should be geographically stationary; and generate the policy relating to conditions for entering into the battery conservation mode in response to the dates and times when the tracker device should be geographically stationary. 17. A non-transitory, computer readable medium storing a plurality of processor-executable instructions, wherein executing the processor-executable instructions cause one or more processors to:
determine a geographical location of the tracker device and communicate the geographic location of the tracker device to a wireless telecommunications network to which the tracker device is connected; receive a policy relating to conditions for entering into a battery conservation mode in response to a specified trigger, the specified trigger including the tracker device being within a particular distance of a user device configured to monitor the geographic location of the tracker device, and the battery conservation mode including a mode of operation where the tracker device conserves battery power, of the tracker device, by refraining from determining the geographic location of the tracker device and from communicating the geographic location to the wireless telecommunications network; monitor conditions corresponding to the specified trigger; determine that the monitored conditions have satisfied the specified trigger; and enter, in response to a determination that the monitored conditions have satisfied the specified trigger, into the battery conservation mode. 18. The non-transitory, computer readable medium of claim 17, wherein, in response to entering into the battery conservation mode, the processor-executable instructions cause the one or more processors to:
notify the user device that the tracker device has entered into the power conservation mode. 19. The non-transitory, computer readable medium of claim 17, wherein the monitored conditions include a date and time during which the tracker device is geographically stationary. 20. The non-transitory, computer readable medium of claim 19, wherein the monitored conditions further include a particular geographic location associated with the date and time. | 2,600 |
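The proximity trigger claimed in the record above (enter a battery conservation mode when the tracker is within a particular distance of the monitoring user device, otherwise keep determining and reporting location) can be sketched as follows. This is an illustrative sketch only: the 50 m threshold and the planar Euclidean distance are assumptions, not values from the patent.

```python
import math

def within_trigger_distance(tracker_xy, user_device_xy, threshold_m):
    """Illustrative proximity check: planar Euclidean distance in meters."""
    dx = tracker_xy[0] - user_device_xy[0]
    dy = tracker_xy[1] - user_device_xy[1]
    return math.hypot(dx, dy) <= threshold_m

def next_mode(tracker_xy, user_device_xy, threshold_m=50.0):
    # If the monitoring user device is nearby, stop GPS fixes and network
    # uplinks (the claimed battery conservation mode); otherwise keep tracking.
    if within_trigger_distance(tracker_xy, user_device_xy, threshold_m):
        return "battery_conservation"
    return "tracking"

mode_near = next_mode((0.0, 0.0), (10.0, 10.0))   # ~14 m apart
mode_far = next_mode((0.0, 0.0), (500.0, 0.0))    # 500 m apart
```

Per claim 2, a real device would also notify the user device when it enters the conservation mode; that side effect is omitted here for brevity.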
9,766 | 9,766 | 14,436,045 | 2,675 | A computer-implemented method at an electronic device, the method comprising: receiving plain text from one of the plurality of software applications; processing the text into compressed text, while maintaining comprehensibility of the compressed text; and returning the compressed text to the one of the plurality of software applications. | 1-14. (canceled) 15. A computer program on an electronic device for processing text, the computer program running as a background service in communication with a plurality of communication applications on the electronic device, and performing the method of:
receiving plain text from one of the plurality of software applications; processing the text into compressed text, while maintaining comprehensibility of the compressed text; and returning the compressed text to the one of the plurality of software applications. 16. The computer program of claim 15, wherein the compressed text is text speak. 17. The computer program of claim 15, wherein the step of processing the text into compressed text includes accessing a dictionary comprising one or more mappings of groups of text characters to compressed text. 18. The computer program of claim 17, wherein the dictionary is stored locally on the electronic device. 19. The computer program of claim 18, wherein the dictionary further comprises custom mappings added by the electronic device. 20. The computer program of claim 15, wherein the processing the text into compressed text is ceased when the compressed text falls below a predetermined size. 21. The computer program of claim 20, wherein the predetermined size is the string length. 22. The computer program of claim 17, wherein the processing of text comprises:
splitting the received text into blocks of text, said splitting determined by the positions of space characters in the received text; and for each block of text, if the block of text is determined to have a mapping in the dictionary, replacing it with the corresponding compressed text. 23. The computer program of claim 22, wherein the processing of text further comprises:
for each block of text, if the block of text is determined to be an excluded block of text, skipping the step of determining if the block of text has a mapping in the dictionary. 24. The computer program of claim 15, wherein the compressed text uses less memory than the received text. 25. The computer program of claim 15, the method further comprising:
detecting that the received text is compressed text and processing the text into uncompressed text. 26. The computer program of claim 15, wherein the plurality of communication applications include one or more of: a text messaging application, a Twitter® application, an email application, and a social networking application. 27. An electronic device comprising:
one or more processors; and, memory comprising instructions which, when executed by one or more of the processors, cause the device to run a background service in communication with a plurality of communication applications on the electronic device, and to: receive plain text from one of the plurality of software applications;
process the text into compressed text, while maintaining comprehensibility of the compressed text; and
return the compressed text to the one of the plurality of software applications. 28. The electronic device of claim 27, wherein the compressed text is text speak. 29. The electronic device of claim 27, wherein the step of processing the text into compressed text includes accessing a dictionary comprising one or more mappings of groups of text characters to compressed text. 30. The electronic device of claim 29, wherein the dictionary is stored locally on the electronic device. 31. The electronic device of claim 30, wherein the dictionary further comprises custom mappings added by the electronic device. 32. The electronic device of claim 27, wherein the processing the text into compressed text is ceased when the compressed text falls below a predetermined size. 33. The electronic device of claim 32, wherein the predetermined size is the string length. 34. The electronic device of claim 29, wherein the processing of text comprises:
splitting the received text into blocks of text, said splitting determined by the positions of space characters in the received text; and for each block of text, if the block of text is determined to have a mapping in the dictionary, replacing it with the corresponding compressed text. 35. The electronic device of claim 34, wherein the processing of text further comprises:
for each block of text, if the block of text is determined to be an excluded block of text, skipping the step of determining if the block of text has a mapping in the dictionary. 36. The electronic device of claim 27, wherein the compressed text uses less memory than the received text. 37. The electronic device of claim 27, the method further comprising:
detecting that the received text is compressed text and processing the text into uncompressed text. 38. The electronic device of claim 27, wherein the plurality of communication applications include one or more of: a text messaging application, a Twitter® application, an email application, and a social networking application. 39. A non-transitory computer-readable medium comprising instructions which, when executed by one or more processors of an electronic device, cause the device to run a background service in communication with a plurality of communication applications on the electronic device, and to:
receive plain text from one of the plurality of software applications;
process the text into compressed text, while maintaining comprehensibility of the compressed text; and
return the compressed text to the one of the plurality of software applications. | A computer-implemented method at an electronic device, the method comprising: receiving plain text from one of the plurality of software applications; processing the text into compressed text, while maintaining comprehensibility of the compressed text; and returning the compressed text to the one of the plurality of software applications.1-14. (canceled) 15. A computer program on an electronic device for processing text, the computer program running as a background service in communication with a plurality of communication applications on the electronic device, and performing the method of:
receiving plain text from one of the plurality of software applications; processing the text into compressed text, while maintaining comprehensibility of the compressed text; and returning the compressed text to the one of the plurality of software applications. 16. The computer program of claim 15, wherein the compressed text is text speak. 17. The computer program of claim 15, wherein the step of processing the text into compressed text includes accessing a dictionary comprising one or more mappings of groups of text characters to compressed text. 18. The computer program of claim 17, wherein the dictionary is stored locally on the electronic device. 19. The computer program of claim 18, wherein the dictionary further comprises custom mappings added by the electronic device. 20. The computer program of claim 15, wherein the processing the text into compressed text is ceased when the compressed text falls below a predetermined size. 21. The computer program of claim 20, wherein the predetermined size is the string length. 22. The computer program of claim 17, wherein the processing of text comprises:
splitting the received text into blocks of text, said splitting determined by the positions of space characters in the received text; and for each block of text, if the block of text is determined to have a mapping in the dictionary, replacing it with the corresponding compressed text. 23. The computer program of claim 22, wherein the processing of text further comprises:
for each block of text, if the block of text is determined to be an excluded block of text, skipping the step of determining if the block of text has a mapping in the dictionary. 24. The computer program of claim 15, wherein the compressed text uses less memory than the received text. 25. The computer program of claim 15, the method further comprising:
detecting that the received text is compressed text and processing the text into uncompressed text. 26. The computer program of claim 15, wherein the plurality of communication applications include one or more of: a text messaging application, a Twitter® application, an email application, and a social networking application. 27. An electronic device comprising:
one or more processors; and, memory comprising instructions which, when executed by one or more of the processors, cause the device to run a background service in communication with a plurality of communication applications on the electronic device, and to: receive plain text from one of the plurality of software applications;
process the text into compressed text, while maintaining comprehensibility of the compressed text; and
return the compressed text to the one of the plurality of software applications. 28. The electronic device of claim 27, wherein the compressed text is text speak. 29. The electronic device of claim 27, wherein the step of processing the text into compressed text includes accessing a dictionary comprising one or more mappings of groups of text characters to compressed text. 30. The electronic device of claim 29, wherein the dictionary is stored locally on the electronic device. 31. The electronic device of claim 30, wherein the dictionary further comprises custom mappings added by the electronic device. 32. The electronic device of claim 27, wherein the processing the text into compressed text is ceased when the compressed text falls below a predetermined size. 33. The electronic device of claim 32, wherein the predetermined size is the string length. 34. The electronic device of claim 29, wherein the processing of text comprises:
splitting the received text into blocks of text, said splitting determined by the positions of space characters in the received text; and for each block of text, if the block of text is determined to have a mapping in the dictionary, replacing it with the corresponding compressed text. 35. The electronic device of claim 34, wherein the processing of text further comprises:
for each block of text, if the block of text is determined to be an excluded block of text, skipping the step of determining if the block of text has a mapping in the dictionary. 36. The electronic device of claim 27, wherein the compressed text uses less memory than the received text. 37. The electronic device of claim 27, the method further comprising:
detecting that the received text is compressed text and processing the text into uncompressed text. 38. The electronic device of claim 27, wherein the plurality of communication applications include one or more of: a text messaging application, a Twitter® application, an email application, and a social networking application. 39. A non-transitory computer-readable medium comprising instructions which, when executed by one or more processors of an electronic device, cause the device to run a background service in communication with a plurality of communication applications on the electronic device, and to:
receive plain text from one of the plurality of software applications;
process the text into compressed text, while maintaining comprehensibility of the compressed text; and
return the compressed text to the one of the plurality of software applications. | 2,600 |
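The dictionary-replacement scheme described in the claims above (split on space characters, replace mapped blocks with their "text speak" forms, skip excluded blocks, cease once the text falls below a predetermined size) can be sketched as follows. The dictionary entries, the URL-based exclusion rule, and the 160-character default are illustrative assumptions, not taken from the application:

```python
# Minimal sketch of dictionary-based "text speak" compression.
# Dictionary contents, exclusion rule, and size threshold are
# illustrative assumptions.

# Mapping of plain-text character groups to compressed forms.
DICTIONARY = {
    "are": "r",
    "you": "u",
    "to": "2",
    "for": "4",
    "later": "l8r",
    "see": "c",
}

def is_excluded(block: str) -> bool:
    # Example exclusion rule: skip anything that looks like a URL.
    return block.startswith("http")

def compress(text: str, max_size: int = 160) -> str:
    """Replace mapped blocks until the text falls below max_size."""
    blocks = text.split(" ")          # split on space characters
    for i, block in enumerate(blocks):
        if len(" ".join(blocks)) <= max_size:
            break                     # cease once below the threshold
        if is_excluded(block):
            continue                  # skip excluded blocks
        key = block.lower()
        if key in DICTIONARY:
            blocks[i] = DICTIONARY[key]
    return " ".join(blocks)

print(compress("see you later", max_size=5))  # "c u l8r"
```

A decompression pass would apply the inverse mapping, which is why the claims also cover detecting already-compressed text and expanding it.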
9,767 | 9,767 | 15,118,720 | 2,674 | Apparatuses, arrangements, and methods therein for generation of comfort noise are disclosed. In short, the solution relates to exploiting the spatial coherence of multiple input audio channels in order to generate high-quality multi-channel comfort noise. | 1. A method for generation of comfort noise for at least two audio channels, the method comprising:
determining spectral characteristics of audio signals on at least two input audio channels; determining a spatial coherence between the audio signals on the respective input audio channels; and generating comfort noise for at least two output audio channels, based on the determined spectral characteristics and spatial coherence. 2. The method according to claim 1, wherein the determining and generation is performed by an echo canceller, or, where the determining is performed in a transmitting node, and the determined information is signaled from the transmitting node to a receiving node, where the comfort noise is generated. 3. (canceled) 4. The method according to claim 1, wherein the spatial coherence is determined by applying a coherence function on the audio signals on the at least two input audio channels. 5. The method according to claim 1, wherein the spatial coherence Cxy between two signals, x and y, of the at least two signals, is determined as: Cxy=|Sxy|2/(Sxx 2*Syy 2); where Sxy is the cross-spectral density between x and y, and Sxx and Syy is the autospectral density of x and y respectively. 6. The method according to claim 1, wherein the coherence is approximated as a cross-correlation between the audio signals on the respective input audio channels. 7. (canceled) 8. The method according to claim 1, wherein the generation of a comfort noise signal N_1 for an output audio channel comprises:
determining a spectral shaping function H_1, based on the information on spectral characteristics of one of the input audio signals and the spatial coherence between the input audio signal and at least another input audio signal; and
applying the spectral shaping function H_1 to a first random noise signal W_1 and to a second random noise signal W_2(f), where W_2(f) is weighted based on the coherence between the input audio signal and the at least another input audio signal. 9.-10. (canceled) 11. An arrangement for generation of comfort noise for at least two audio channels, the arrangement comprising at least one processor and at least one memory, said at least one memory containing instructions executable by said at least one processor, whereby the arrangement is operative to:
determine spectral characteristics of audio signals on at least two input audio channels; determine a spatial coherence between the audio signals on the respective input audio channels; and generate comfort noise for at least two output audio channels, based on the determined spectral characteristics and spatial coherence. 12. The arrangement according to claim 11, wherein the determining and generation is performed by an echo canceller, or, where the determining is performed in a transmitting node, and the determined information is signaled by the transmitting node to a receiving node, by which the comfort noise is generated. 13. (canceled) 14. The arrangement according to claim 1, wherein the spatial coherence is determined by applying a coherence function on a representation of the audio signals on the at least two input audio channels. 15. The arrangement according to claim 11, wherein the spatial coherence Cxy between two signals, x and y, of the at least two signals, is determined as: Cxy=|Sxy|2/(Sxx 2*Syy 2); where Sxy is the cross-spectral density between x and y, and Sxx and Syy is the autospectral density of x and y respectively. 16. The arrangement according to claim 11, wherein the coherence is approximated as a cross-correlation between the audio signals on the respective input audio channels. 17. (canceled) 18. The arrangement according to claim 11, wherein the generation of a comfort noise signal N_1 for an output audio channel comprises:
determining a spectral shaping function H_1, based on the information on spectral characteristics of one of the audio signals and the spatial coherence between the audio signal and at least another audio signal; and
applying the spectral shaping function H_1 to a first random noise signal W_1 and to a second random noise signal W_2(f), where W_2(f) is weighted based on the coherence between the audio signal and the at least another audio signal. 19.-22. (canceled) 23. User equipment comprising the arrangement according to claim 11. 24. User equipment according to claim 23, being operable in a wireless communication network. 25. A computer program comprising computer readable code, which when run in an arrangement causes the arrangement to perform the method according to claim 1. 26. A non-transitory computer program carrier comprising a computer program according to claim 25. 27.-30. (canceled) | Apparatuses, arrangements, and methods therein for generation of comfort noise are disclosed. In short, the solution relates to exploiting the spatial coherence of multiple input audio channels in order to generate high-quality multi-channel comfort noise. 1. A method for generation of comfort noise for at least two audio channels, the method comprising:
determining spectral characteristics of audio signals on at least two input audio channels; determining a spatial coherence between the audio signals on the respective input audio channels; and generating comfort noise for at least two output audio channels, based on the determined spectral characteristics and spatial coherence. 2. The method according to claim 1, wherein the determining and generation is performed by an echo canceller, or, where the determining is performed in a transmitting node, and the determined information is signaled from the transmitting node to a receiving node, where the comfort noise is generated. 3. (canceled) 4. The method according to claim 1, wherein the spatial coherence is determined by applying a coherence function on the audio signals on the at least two input audio channels. 5. The method according to claim 1, wherein the spatial coherence Cxy between two signals, x and y, of the at least two signals, is determined as: Cxy=|Sxy|2/(Sxx 2*Syy 2); where Sxy is the cross-spectral density between x and y, and Sxx and Syy is the autospectral density of x and y respectively. 6. The method according to claim 1, wherein the coherence is approximated as a cross-correlation between the audio signals on the respective input audio channels. 7. (canceled) 8. The method according to claim 1, wherein the generation of a comfort noise signal N_1 for an output audio channel comprises:
determining a spectral shaping function H_1, based on the information on spectral characteristics of one of the input audio signals and the spatial coherence between the input audio signal and at least another input audio signal; and
applying the spectral shaping function H_1 to a first random noise signal W_1 and to a second random noise signal W_2(f), where W_2(f) is weighted based on the coherence between the input audio signal and the at least another input audio signal. 9.-10. (canceled) 11. An arrangement for generation of comfort noise for at least two audio channels, the arrangement comprising at least one processor and at least one memory, said at least one memory containing instructions executable by said at least one processor, whereby the arrangement is operative to:
determine spectral characteristics of audio signals on at least two input audio channels; determine a spatial coherence between the audio signals on the respective input audio channels; and generate comfort noise for at least two output audio channels, based on the determined spectral characteristics and spatial coherence. 12. The arrangement according to claim 11, wherein the determining and generation is performed by an echo canceller, or, where the determining is performed in a transmitting node, and the determined information is signaled by the transmitting node to a receiving node, by which the comfort noise is generated. 13. (canceled) 14. The arrangement according to claim 1, wherein the spatial coherence is determined by applying a coherence function on a representation of the audio signals on the at least two input audio channels. 15. The arrangement according to claim 11, wherein the spatial coherence Cxy between two signals, x and y, of the at least two signals, is determined as: Cxy=|Sxy|2/(Sxx 2*Syy 2); where Sxy is the cross-spectral density between x and y, and Sxx and Syy is the autospectral density of x and y respectively. 16. The arrangement according to claim 11, wherein the coherence is approximated as a cross-correlation between the audio signals on the respective input audio channels. 17. (canceled) 18. The arrangement according to claim 11, wherein the generation of a comfort noise signal N_1 for an output audio channel comprises:
determining a spectral shaping function H_1, based on the information on spectral characteristics of one of the audio signals and the spatial coherence between the audio signal and at least another audio signal; and
applying the spectral shaping function H_1 to a first random noise signal W_1 and to a second random noise signal W_2(f), where W_2(f) is weighted based on the coherence between the audio signal and the at least another audio signal. 19.-22. (canceled) 23. User equipment comprising the arrangement according to claim 11. 24. User equipment according to claim 23, being operable in a wireless communication network. 25. A computer program comprising computer readable code, which when run in an arrangement causes the arrangement to perform the method according to claim 1. 26. A non-transitory computer program carrier comprising a computer program according to claim 25. 27.-30. (canceled) | 2,600
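Claim 6 above permits approximating the spatial coherence as a cross-correlation between the input channels, and claim 8 then weights a second noise signal by that coherence estimate. The following pure-Python sketch illustrates the zero-lag variant of that approximation; a production encoder would instead estimate per-band magnitude-squared coherence from averaged cross-spectral densities as in claim 5. The signal values and the zero-lag simplification are illustrative assumptions:

```python
# Sketch of claim 6's approximation: spatial coherence estimated as a
# normalized cross-correlation between two input channels. The zero-lag
# simplification and the example signals are illustrative assumptions.
import math

def normalized_cross_correlation(x, y):
    """Zero-lag cross-correlation of x and y, normalized to [-1, 1]."""
    n = min(len(x), len(y))
    mx = sum(x[:n]) / n
    my = sum(y[:n]) / n
    num = sum((x[i] - mx) * (y[i] - my) for i in range(n))
    den = math.sqrt(sum((x[i] - mx) ** 2 for i in range(n))
                    * sum((y[i] - my) ** 2 for i in range(n)))
    return num / den if den else 0.0

def scale_second_noise(w2, coherence):
    """Weight the second noise signal W_2 by the coherence estimate."""
    return [coherence * v for v in w2]

left = [0.0, 1.0, 0.0, -1.0] * 8
right = [0.0, 0.9, 0.0, -0.9] * 8   # strongly correlated channel
print(normalized_cross_correlation(left, right))
```

Highly correlated channels yield an estimate near 1.0, so the second noise component is passed through almost unattenuated, reproducing the inter-channel coherence of the background noise in the generated comfort noise.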
9,768 | 9,768 | 14,797,418 | 2,642 | The present invention provides a hand held electronic device, which includes a near field communication element located proximate a top of the device, that is adapted for being selectively enabled. The hand held electronic device further includes a user identification sensor, which in at least some instances is a further near field communication element, and is adapted for confirming the identity of an authorized user of the device. The user identification sensor is located in a user holding area at a back side surface of the device. The user identification sensor is adapted for sensing a user interaction in an area proximate the back side surface of the device and receiving as part of the interaction, user identification information corresponding to a particular user presently using the device, and determining whether the user identification information corresponding to the particular user matches identification information for a predetermined authorized user. Upon confirming the identity of an authorized user of the device by the user identification sensor, the near field communication element located proximate the top of the device is enabled. When the identity of an authorized user of the device is not confirmed by the user identification sensor, the near field communication element located at the top of the device is not enabled. | 1. A hand held electronic device comprising:
a near field communication element located proximate a top of the device, which is adapted for being selectively enabled; a user identification sensor adapted for confirming the identity of an authorized user of the device, the user identification sensor being located in a user holding area at a back side surface of the device, where the user identification sensor is adapted for sensing a user interaction in an area proximate the back side surface of the device and receiving as part of the interaction, user identification information corresponding to a particular user presently using the device, and determining whether the user identification information corresponding to the particular user matches identification information for a predetermined authorized user; wherein upon confirming the identity of an authorized user of the device by the user identification sensor, the near field communication element located proximate the top of the device is enabled, and wherein when the identity of an authorized user of the device is not confirmed by the user identification sensor, the near field communication element located at the top of the device is not enabled. 2. A hand held electronic device in accordance with claim 1, wherein the user identification sensor is a fingerprint sensor adapted for sensing a fingerprint of the particular user presently using the device, and comparing the sensed fingerprint with one or more fingerprints corresponding to one or more predetermined authorized users. 3. A hand held electronic device in accordance with claim 1, wherein the user identification sensor is a second near field communication element. 4. A hand held electronic device in accordance with claim 3, wherein the second near field communication element is adapted for reading a value from a radio frequency identification tag. 5. 
A hand held electronic device in accordance with claim 4, wherein the radio frequency identification tag is embedded in an article being worn by the user in the area of a hand of the user intended to be holding the electronic device. 6. A hand held electronic device in accordance with claim 1, wherein the near field communication element located proximate the top of the device, when enabled, is adapted for operating in a card emulation mode for supporting a contactless card payment. 7. A hand held electronic device in accordance with claim 1, wherein the hand held electronic device further comprises a display located along at least a portion of a front side surface of the device, where the front side surface is opposite the back side surface. 8. A hand held electronic device in accordance with claim 7, wherein the display is a touch sensitive display, wherein the touch sensitive display has a display surface that is adapted for presenting visual information to a user and detecting a user interaction proximate the display surface. 9. A hand held electronic device comprising:
a first near field communication element located proximate a top of the device, where the first near field communication element is adapted to operate in a card emulation mode for supporting a contactless card payment; a second near field communication element located proximate a back side surface of the device, where the second near field communication element is adapted to operate in a mode different than the card emulation mode for which the first near field communication element is adapted to operate. 10. A hand held electronic device in accordance with claim 9, wherein the second near field communication element is adapted to operate in at least one of a reader mode or a writer mode, which are respectively adapted for reading information from or writing information to a smart tag. 11. A hand held electronic device in accordance with claim 10, wherein the second near field communication element is further adapted to produce a magnetic field proximate the second near field communication element, and wherein the smart tag is powered by electromagnetic induction from the magnetic field produced by the second near field communication element. 12. A hand held electronic device in accordance with claim 9, wherein the second near field communication element is adapted to operate in a peer to peer mode where two near field communication devices including the second near field communication element and another near field communication element, which is not part of the hand held electronic device, can exchange data between each other. 13. A hand held electronic device in accordance with claim 9, further comprising a near field communication controller, which has a separate interface with each of the first near field communication element and the second near field communication element, where the controller can independently enable/disable the first near field communication element and the second near field communication element. 14. 
A hand held electronic device in accordance with claim 13, wherein the controller relative to the second near field communication element, when active, is adapted for transitioning between different modes each having one of a plurality of different standards, which can include one or more different forms of radio frequency modulation when polling for tags or another communication device as part of a reader/initiator phase. 15. A hand held electronic device in accordance with claim 14, wherein the controller relative to the second near field communication element includes periodic idle phases during polling, where the second near field communication element is deactivated. 16. A hand held electronic device in accordance with claim 13, wherein the second near field communication element is adapted for confirming the identity of an authorized user of the device through an interaction with an identity confirming smart tag, such that when the identity of the user is confirmed to be an authorized user, the first near field communication element is selectively enabled. 17. A hand held electronic device in accordance with claim 9, wherein the first near field communication element includes a single turn loop antenna. 18. A hand held electronic device in accordance with claim 9, wherein the second near field communication element includes a multi-turn loop antenna. 19. A hand held electronic device in accordance with claim 9, wherein the hand held electronic device further comprises a display located along at least a portion of a front side surface of the device, where the front side surface is opposite the back side surface. 20. A method for managing the operation of multiple near field communication elements in a hand held electronic device, the method comprising:
storing identification information for one or more predetermined authorized users; sensing a user interaction in an area proximate the back side surface of the device and receiving as part of the interaction, user identification information corresponding to a particular user presently using the device; confirming an identity of the present user of the device as being an authorized user of the device by a user identification sensor, the user identification sensor being located in a user holding area at a back side surface of the device by determining whether the user identification information corresponding to the particular user matches the stored identification information for one of the predetermined authorized users; upon confirming the identity of the present user of the device as being an authorized user of the device by the user identification sensor, enabling a near field communication element located proximate a top of the device, which is adapted for being selectively enabled, and not enabling the near field communication element located at the top of the device, when the identity of the present user of the device is not confirmed by the user identification sensor as being an authorized user of the device. | The present invention provides a hand held electronic device, which includes a near field communication element located proximate a top of the device, that is adapted for being selectively enabled. The hand held electronic device further includes a user identification sensor, which in at least some instances is a further near field communication element, and is adapted for confirming the identity of an authorized user of the device. The user identification sensor is located in a user holding area at a back side surface of the device. 
The user identification sensor is adapted for sensing a user interaction in an area proximate the back side surface of the device and receiving as part of the interaction, user identification information corresponding to a particular user presently using the device, and determining whether the user identification information corresponding to the particular user matches identification information for a predetermined authorized user. Upon confirming the identity of an authorized user of the device by the user identification sensor, the near field communication element located proximate the top of the device is enabled. When the identity of an authorized user of the device is not confirmed by the user identification sensor, the near field communication element located at the top of the device is not enabled.1. A hand held electronic device comprising:
a near field communication element located proximate a top of the device, which is adapted for being selectively enabled; a user identification sensor adapted for confirming the identity of an authorized user of the device, the user identification sensor being located in a user holding area at a back side surface of the device, where the user identification sensor is adapted for sensing a user interaction in an area proximate the back side surface of the device and receiving as part of the interaction, user identification information corresponding to a particular user presently using the device, and determining whether the user identification information corresponding to the particular user matches identification information for a predetermined authorized user; wherein upon confirming the identity of an authorized user of the device by the user identification sensor, the near field communication element located proximate the top of the device is enabled, and wherein when the identity of an authorized user of the device is not confirmed by the user identification sensor, the near field communication element located at the top of the device is not enabled. 2. A hand held electronic device in accordance with claim 1, wherein the user identification sensor is a fingerprint sensor adapted for sensing a fingerprint of the particular user presently using the device, and comparing the sensed fingerprint with one or more fingerprints corresponding to one or more predetermined authorized users. 3. A hand held electronic device in accordance with claim 1, wherein the user identification sensor is a second near field communication element. 4. A hand held electronic device in accordance with claim 3, wherein the second near field communication element is adapted for reading a value from a radio frequency identification tag. 5. 
A hand held electronic device in accordance with claim 4, wherein the radio frequency identification tag is embedded in an article being worn by the user in the area of a hand of the user intended to be holding the electronic device. 6. A hand held electronic device in accordance with claim 1, wherein the near field communication element located proximate the top of the device, when enabled, is adapted for operating in a card emulation mode for supporting a contactless card payment. 7. A hand held electronic device in accordance with claim 1, wherein the hand held electronic device further comprises a display located along at least a portion of a front side surface of the device, where the front side surface is opposite the back side surface. 8. A hand held electronic device in accordance with claim 7, wherein the display is a touch sensitive display, wherein the touch sensitive display has a display surface that is adapted for presenting visual information to a user and detecting a user interaction proximate the display surface. 9. A hand held electronic device comprising:
a first near field communication element located proximate a top of the device, where the first near field communication element is adapted to operate in a card emulation mode for supporting a contactless card payment; a second near field communication element located proximate a back side surface of the device, where the second near field communication element is adapted to operate in a mode different than the card emulation mode for which the first near field communication element is adapted to operate. 10. A hand held electronic device in accordance with claim 9, wherein the second near field communication element is adapted to operate in at least one of a reader mode or a writer mode, which are respectively adapted for reading information from or writing information to a smart tag. 11. A hand held electronic device in accordance with claim 10, wherein the second near field communication element is further adapted to produce a magnetic field proximate the second near field communication element, and wherein the smart tag is powered by electromagnetic induction from the magnetic field produced by the second near field communication element. 12. A hand held electronic device in accordance with claim 9, wherein the second near field communication element is adapted to operate in a peer to peer mode where two near field communication devices including the second near field communication element and another near field communication element, which is not part of the hand held electronic device, can exchange data between each other. 13. A hand held electronic device in accordance with claim 9, further comprising a near field communication controller, which has a separate interface with each of the first near field communication element and the second near field communication element, where the controller can independently enable/disable the first near field communication element and the second near field communication element. 14. 
A hand held electronic device in accordance with claim 13, wherein the controller relative to the second near field communication element, when active, is adapted for transitioning between different modes each having one of a plurality of different standards, which can include one or more different forms of radio frequency modulation when polling for tags or another communication device as part of a reader/initiator phase. 15. A hand held electronic device in accordance with claim 14, wherein the controller relative to the second near field communication element includes periodic idle phases during polling, where the second near field communication element is deactivated. 16. A hand held electronic device in accordance with claim 13, wherein the second near field communication element is adapted for confirming the identity of an authorized user of the device through an interaction with an identity confirming smart tag, such that when the identity of the user is confirmed to be an authorized user, the first near field communication element is selectively enabled. 17. A hand held electronic device in accordance with claim 9, wherein the first near field communication element includes a single turn loop antenna. 18. A hand held electronic device in accordance with claim 9, wherein the second near field communication element includes a multi-turn loop antenna. 19. A hand held electronic device in accordance with claim 9, wherein the hand held electronic device further comprises a display located along at least a portion of a front side surface of the device, where the front side surface is opposite the back side surface. 20. A method for managing the operation of multiple near field communication elements in a hand held electronic device, the method comprising:
storing identification information for one or more predetermined authorized users; sensing a user interaction in an area proximate the back side surface of the device and receiving as part of the interaction, user identification information corresponding to a particular user presently using the device; confirming an identity of the present user of the device as being an authorized user of the device by a user identification sensor, the user identification sensor being located in a user holding area at a back side surface of the device by determining whether the user identification information corresponding to the particular user matches the stored identification information for one of the predetermined authorized users; upon confirming the identity of the present user of the device as being an authorized user of the device by the user identification sensor, enabling a near field communication element located proximate a top of the device, which is adapted for being selectively enabled, and not enabling the near field communication element located at the top of the device, when the identity of the present user of the device is not confirmed by the user identification sensor as being an authorized user of the device. | 2,600 |
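The gating behaviour of claim 20 above — enable the top NFC element only after the rear user-identification sensor matches the present user against stored authorized identities — can be sketched as follows. The class and identifier names are hypothetical, and a real device would match sensor readings such as fingerprints or RFID tag values rather than strings:

```python
# Sketch of claim 20: the rear user-identification sensor gates the top
# NFC element used for contactless payment. Names are hypothetical; real
# devices would compare sensor readings, not string identifiers.

class HandheldDevice:
    def __init__(self, authorized_ids):
        # Stored identification information for predetermined authorized users.
        self.authorized_ids = set(authorized_ids)
        self.payment_nfc_enabled = False  # top NFC element (card emulation)

    def on_rear_sensor_read(self, user_id):
        """Handle an identification read from the back-side sensor."""
        if user_id in self.authorized_ids:
            self.payment_nfc_enabled = True   # authorized: enable payments
        else:
            self.payment_nfc_enabled = False  # unknown user: keep disabled
        return self.payment_nfc_enabled

device = HandheldDevice(authorized_ids={"tag-1234"})
print(device.on_rear_sensor_read("tag-1234"))  # True
print(device.on_rear_sensor_read("tag-9999"))  # False
```

Keeping the payment element disabled by default mirrors the claim's fail-closed design: an unconfirmed identity never leaves card emulation active.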
9,769 | 9,769 | 15,256,218 | 2,611 | A computer-implemented method, non-transitory medium having machine instructions and/or system having memory and a processor may perform operations including displaying in a first region on a display screen, at least a portion of a map depicting a geographical area; receiving user input specifying one or more data feeds, each data feed corresponding to a type of aspects, each aspect having an associated geographical location; making each data feed available in a second display region on the screen; receiving user input specifying data feeds available in the second display region to make active; and for each data feed in the second display region made active, displaying a layer of visual indications (e.g., icons) on the displayed map, wherein each visual indication in the layer corresponds to a different aspect provided by the corresponding data feed, and each visual indication is displayed on the map at its associated geographical location. | 1. A computer-implemented method comprising:
displaying in a first region on a display screen, at least a portion of a map depicting a geographical area; receiving user input specifying one or more data feeds, each data feed corresponding to a type of aspects, each aspect having an associated geographical location; making each specified data feed available in a second display region on the display screen; receiving user input specifying one or more data feeds available in the second display region to make active; and for each data feed in the second display region made active, displaying a layer of visual indications on top of the displayed map, wherein each visual indication in the layer corresponds to a different aspect provided by the corresponding data feed, and each visual indication is displayed on the map at its associated geographical location. 2. The method of claim 1 wherein the displayed map is zoom-able and translatable to allow different or additional portions of the map to be displayed. 3. The method of claim 1 wherein the data feeds correspond to aspects including naturally occurring events or human-initiated events. 4. The method of claim 1 wherein one or more data feeds correspond to aspects relating to available resources. 5. The method of claim 1 wherein one or more data feeds correspond to aspects relating to a particular geographic region. 6. The method of claim 1 wherein the second display region comprises a tray that is superimposed over the first displayed region. 7. The method of claim 6 wherein receiving user input to make a data feed in the tray active comprises selecting an identifier corresponding to the desired data feed. 8. The method of claim 1 further comprising displaying a plurality of layers of visual indications on the displayed map, each layer of visual indications corresponding to a different type of aspects. 9. The method of claim 1 wherein the displayed visual indications have an appearance that suggests the aspect type to which they respectively correspond. 10. 
The method of claim 1 further comprising a third display region comprising a plurality of data feeds for selection by the user to make available in the second display region. 11. The method of claim 1 further comprises capturing a snapshot of the map with one or more layers of visual indications displayed thereon, the snapshot corresponding to a particular moment in time. 12. The method of claim 1 wherein the map is displayed as a base map, a terrain map, a satellite map, or any combination thereof. 13. A system comprising:
a memory storing machine instructions; a processor to execute machine instructions stored in the memory, wherein execution of the machine instructions causes the system to perform operations including the following: displaying in a first region on a display screen, at least a portion of a map depicting a geographical area; receiving user input specifying one or more data feeds, each data feed corresponding to a type of aspects, each aspect having an associated geographical location; making each specified data feed available in a second display region on the display screen; receiving user input specifying one or more data feeds available in the second display region to make active; and for each data feed in the second display region made active, displaying a layer of visual indications on the displayed map, wherein each visual indication in the layer corresponds to a different aspect provided by the corresponding data feed, and each visual indication is displayed on the map at its associated geographical location. 14. A non-transitory machine-readable medium comprising machine instructions that, when executed by a processor, cause one or more machines to perform operations comprising:
displaying in a first region on a display screen, at least a portion of a map depicting a user-specified geographical area; receiving user input specifying one or more data feeds, each data feed corresponding to a type of aspects, each aspect having an associated geographical location; making each specified data feed available in a second display region on the display screen; receiving user input specifying one or more data feeds available in the second display region to make active; and for each data feed in the second display region made active, displaying a layer of visual indications on the displayed map, wherein each visual indication in the layer corresponds to a different aspect provided by the corresponding data feed, and each visual indication is displayed on the map at its associated geographical location. | A computer-implemented method, non-transitory medium having machine instructions and/or system having memory and a processor may perform operations including displaying in a first region on a display screen, at least a portion of a map depicting a geographical area; receiving user input specifying one or more data feeds, each data feed corresponding to a type of aspects, each aspect having an associated geographical location; making each data feed available in a second display region on the screen; receiving user input specifying data feeds available in the second display region to make active; and for each data feed in the second display region made active, displaying a layer of visual indications (e.g., icons) on the displayed map, wherein each visual indication in the layer corresponds to a different aspect provided by the corresponding data feed, and each visual indication is displayed on the map at its associated geographical location.1. A computer-implemented method comprising:
displaying in a first region on a display screen, at least a portion of a map depicting a geographical area; receiving user input specifying one or more data feeds, each data feed corresponding to a type of aspects, each aspect having an associated geographical location; making each specified data feed available in a second display region on the display screen; receiving user input specifying one or more data feeds available in the second display region to make active; and for each data feed in the second display region made active, displaying a layer of visual indications on top of the displayed map, wherein each visual indication in the layer corresponds to a different aspect provided by the corresponding data feed, and each visual indication is displayed on the map at its associated geographical location. 2. The method of claim 1 wherein the displayed map is zoom-able and translatable to allow different or additional portions of the map to be displayed. 3. The method of claim 1 wherein the data feeds correspond to aspects including naturally occurring events or human-initiated events. 4. The method of claim 1 wherein one or more data feeds correspond to aspects relating to available resources. 5. The method of claim 1 wherein one or more data feeds correspond to aspects relating to a particular geographic region. 6. The method of claim 1 wherein the second display region comprises a tray that is superimposed over the first displayed region. 7. The method of claim 6 wherein receiving user input to make a data feed in the tray active comprises selecting an identifier corresponding to the desired data feed. 8. The method of claim 1 further comprising displaying a plurality of layers of visual indications on the displayed map, each layer of visual indications corresponding to a different type of aspects. 9. The method of claim 1 wherein the displayed visual indications have an appearance that suggests the aspect type to which they respectively correspond. 10. 
The method of claim 1 further comprising a third display region comprising a plurality of data feeds for selection by the user to make available in the second display region. 11. The method of claim 1 further comprises capturing a snapshot of the map with one or more layers of visual indications displayed thereon, the snapshot corresponding to a particular moment in time. 12. The method of claim 1 wherein the map is displayed as a base map, a terrain map, a satellite map, or any combination thereof. 13. A system comprising:
a memory storing machine instructions; a processor to execute machine instructions stored in the memory, wherein execution of the machine instructions causes the system to perform operations including the following: displaying in a first region on a display screen, at least a portion of a map depicting a geographical area; receiving user input specifying one or more data feeds, each data feed corresponding to a type of aspects, each aspect having an associated geographical location; making each specified data feed available in a second display region on the display screen; receiving user input specifying one or more data feeds available in the second display region to make active; and for each data feed in the second display region made active, displaying a layer of visual indications on the displayed map, wherein each visual indication in the layer corresponds to a different aspect provided by the corresponding data feed, and each visual indication is displayed on the map at its associated geographical location. 14. A non-transitory machine-readable medium comprising machine instructions that, when executed by a processor, cause one or more machines to perform operations comprising:
displaying in a first region on a display screen, at least a portion of a map depicting a user-specified geographical area; receiving user input specifying one or more data feeds, each data feed corresponding to a type of aspects, each aspect having an associated geographical location; making each specified data feed available in a second display region on the display screen; receiving user input specifying one or more data feeds available in the second display region to make active; and for each data feed in the second display region made active, displaying a layer of visual indications on the displayed map, wherein each visual indication in the layer corresponds to a different aspect provided by the corresponding data feed, and each visual indication is displayed on the map at its associated geographical location. | 2,600 |
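The feed/tray/layer model of claims 1, 6, and 8 above — feeds are made available in a second display region, activated by the user, and each active feed contributes one layer of visual indications placed at its aspects' geographic locations — can be sketched in a few lines. This is a minimal in-memory illustration; the `MapView` name and data shapes are assumptions invented for the sketch.

```python
class MapView:
    """Illustrative model of the claimed feed activation and layering."""

    def __init__(self):
        self.available_feeds = {}   # the "tray" (second display region)
        self.active_feeds = set()

    def add_feed(self, name, aspects):
        # aspects: list of (lat, lon, label) tuples; each aspect has an
        # associated geographical location, per claim 1.
        self.available_feeds[name] = aspects

    def set_active(self, name, active=True):
        # Making an available feed active (or inactive) via user input.
        if name not in self.available_feeds:
            raise KeyError(name)
        if active:
            self.active_feeds.add(name)
        else:
            self.active_feeds.discard(name)

    def layers(self):
        # One layer of visual indications per active feed, each
        # indication positioned at its aspect's location (claim 8).
        return {
            name: [{"lat": lat, "lon": lon, "label": label}
                   for (lat, lon, label) in self.available_feeds[name]]
            for name in self.active_feeds
        }
```

Deactivating a feed simply drops its layer from the rendered set, which matches the claim's per-feed "made active" conditioning.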
9,770 | 9,770 | 14,345,826 | 2,692 | An image processing apparatus comprises a receiver ( 201 ) for receiving an image signal comprising an encoded image. Another receiver ( 1701 ) receives a data signal from a display ( 107 ) where the data signal comprises a data field that comprises a display dynamic range indication of the display ( 107 ). The display dynamic range indication comprises at least one luminance specification for the display. A dynamic range processor ( 203 ) is arranged to generate an output image by applying a dynamic range transform to the encoded image in response to the display dynamic range indication. An output ( 205 ) outputs an output image signal comprising the output image to the display. The transform may furthermore be performed in response to a target display reference indicative of a dynamic range of a display for which the encoded image is encoded. The invention may be used to generate an improved High Dynamic Range (HDR) image from e.g. a Low Dynamic Range (LDR) image, or vice versa. | 1. An image processing apparatus, comprising:
a receiver for receiving an image signal, the image signal comprising at least an encoded image; a second receiver for receiving a data signal from a display, the data signal comprising a data field which comprises a display dynamic range indication of the display, the display dynamic range indication comprising at least a white point luminance, the white point luminance being able to characterize whether the display is a high dynamic range display; a dynamic range processor arranged to generate an output image by applying a dynamic range transform to the encoded image in response to the received display dynamic range indication; and an output for outputting an output image signal comprising the output image to the display. 2. (canceled) 3. The image processing apparatus of claim 1 wherein the display dynamic range indication further comprises a black point luminance. 4. The image processing apparatus of claim 1 wherein the display dynamic range indication comprises mapping data representing a mapping of the display from display input values to a luminance dynamic range of the display. 5. The image processing apparatus of claim 1 wherein the dynamic range indication comprises an Electro Optical Transfer Function for the display. 6. The image processing apparatus of claim 1 wherein the data signal comprises a plurality of luminance dynamic ranges; and wherein the dynamic range processor (203) is arranged to select a luminance dynamic range from the plurality of luminance dynamic ranges, and to perform the dynamic range transform in response to the selected luminance dynamic range. 7. The image processing apparatus of claim 6 wherein the plurality of luminance dynamic ranges relate to different image types. 8. The image processing apparatus of claim 1 further comprising an output of a controller for outputting a control data signal to the display, the control data signal comprising an indication of a luminance dynamic range to be used by the display. 9. 
The image processing apparatus of claim 8 wherein the control data signal comprises an image processing instruction for the display. 10. The image processing apparatus of claim 9 wherein the image processing instruction comprises a tone mapping indication for the display. 11. The image processing apparatus of claim 1 wherein the display dynamic range indication comprises an ambient light indication, and wherein the dynamic range processor is arranged to adapt the dynamic range transform in response to the ambient light indication. 12. The image processing apparatus of claim 1 wherein the display dynamic range indication is dependent on a user selectable setting of the display. 13. The image processing apparatus of claim 1 wherein the display dynamic range indication comprises dynamic range transform control data; and wherein the dynamic range processor is further arranged to perform the dynamic range transform in response to the dynamic range transform control data. 14. The image processing apparatus of claim 1 wherein the receiver is further arranged to receive a target display reference, the target display reference being indicative of a dynamic range of a target display for which the encoded image is encoded; and wherein
the dynamic range processor is arranged to apply the dynamic range transform to the encoded image in response to the target display reference. 15. The image processing apparatus of claim 1 wherein the dynamic range processor is arranged to select between generating the output image as the encoded image and generating the output image as a transformed image of the encoded image in response to the display dynamic range indication. 16. The image processing apparatus of claim 1 wherein the dynamic range transform comprises a gamut transform. 17. The image processing apparatus of claim 1 further comprising a control data transmitter for transmitting dynamic range control data to a source of the image signal. 18. (canceled) 19. A display comprising:
a receiver for receiving an image signal representing at least one image; a display panel; a display driver for driving the display panel from the image signal; and a transmitter for transmitting a data signal to a source of the image signal, the data signal comprising a data field including a display dynamic range indication for the display, the display dynamic range indication comprising at least a white point luminance, this white point luminance being able to characterize whether the display is a high dynamic range display. 20. The display of claim 19 wherein the display dynamic range indication further comprises a black point luminance. 21. The display of claim 19 wherein the dynamic range indication comprises an Electro Optical Transfer Function of the display. 22. The display of claim 19 wherein the display dynamic range indication comprises an ambient light indication. 23. The display of claim 19 further comprising a second receiver for receiving a control data signal from a source of the image signal, the control data signal comprising an indication of a luminance dynamic range to be used by the display; and wherein the driver is arranged to adapt the driving in response to the luminance dynamic range. 24. An image processing method comprising:
receiving an image signal, the image signal comprising at least an encoded image; receiving a data signal from a display, the data signal comprising a data field which comprises a display dynamic range indication of the display, the display dynamic range indication comprising at least a white point luminance, the white point luminance being able to characterize whether the display is a high dynamic range display; generating an output image by applying a dynamic range transform to the encoded image in response to the display dynamic range indication; and outputting an output image signal comprising the output image to the display. 25.-28. (canceled) | An image processing apparatus comprises a receiver ( 201 ) for receiving an image signal comprising an encoded image. Another receiver ( 1701 ) receives a data signal from a display ( 107 ) where the data signal comprises a data field that comprises a display dynamic range indication of the display ( 107 ). The display dynamic range indication comprises at least one luminance specification for the display. A dynamic range processor ( 203 ) is arranged to generate an output image by applying a dynamic range transform to the encoded image in response to the display dynamic range indication. An output ( 205 ) outputs an output image signal comprising the output image to the display. The transform may furthermore be performed in response to a target display reference indicative of a dynamic range of a display for which the encoded image is encoded. The invention may be used to generate an improved High Dynamic Range (HDR) image from e.g. a Low Dynamic Range (LDR) image, or vice versa. 1. An image processing apparatus, comprising:
a receiver for receiving an image signal, the image signal comprising at least an encoded image; a second receiver for receiving a data signal from a display, the data signal comprising a data field which comprises a display dynamic range indication of the display, the display dynamic range indication comprising at least a white point luminance, the white point luminance being able to characterize whether the display is a high dynamic range display; a dynamic range processor arranged to generate an output image by applying a dynamic range transform to the encoded image in response to the received display dynamic range indication; and an output for outputting an output image signal comprising the output image to the display. 2. (canceled) 3. The image processing apparatus of claim 1 wherein the display dynamic range indication further comprises a black point luminance. 4. The image processing apparatus of claim 1 wherein the display dynamic range indication comprises mapping data representing a mapping of the display from display input values to a luminance dynamic range of the display. 5. The image processing apparatus of claim 1 wherein the dynamic range indication comprises an Electro Optical Transfer Function for the display. 6. The image processing apparatus of claim 1 wherein the data signal comprises a plurality of luminance dynamic ranges; and wherein the dynamic range processor (203) is arranged to select a luminance dynamic range from the plurality of luminance dynamic ranges, and to perform the dynamic range transform in response to the selected luminance dynamic range. 7. The image processing apparatus of claim 6 wherein the plurality of luminance dynamic ranges relate to different image types. 8. The image processing apparatus of claim 1 further comprising an output of a controller for outputting a control data signal to the display, the control data signal comprising an indication of a luminance dynamic range to be used by the display. 9. 
The image processing apparatus of claim 8 wherein the control data signal comprises an image processing instruction for the display. 10. The image processing apparatus of claim 9 wherein the image processing instruction comprises a tone mapping indication for the display. 11. The image processing apparatus of claim 1 wherein the display dynamic range indication comprises an ambient light indication, and wherein the dynamic range processor is arranged to adapt the dynamic range transform in response to the ambient light indication. 12. The image processing apparatus of claim 1 wherein the display dynamic range indication is dependent on a user selectable setting of the display. 13. The image processing apparatus of claim 1 wherein the display dynamic range indication comprises dynamic range transform control data; and wherein the dynamic range processor is further arranged to perform the dynamic range transform in response to the dynamic range transform control data. 14. The image processing apparatus of claim 1 wherein the receiver is further arranged to receive a target display reference, the target display reference being indicative of a dynamic range of a target display for which the encoded image is encoded; and wherein
the dynamic range processor is arranged to apply the dynamic range transform to the encoded image in response to the target display reference. 15. The image processing apparatus of claim 1 wherein the dynamic range processor is arranged to select between generating the output image as the encoded image and generating the output image as a transformed image of the encoded image in response to the display dynamic range indication. 16. The image processing apparatus of claim 1 wherein the dynamic range transform comprises a gamut transform. 17. The image processing apparatus of claim 1 further comprising a control data transmitter for transmitting dynamic range control data to a source of the image signal. 18. (canceled) 19. A display comprising:
a receiver for receiving an image signal representing at least one image; a display panel; a display driver for driving the display panel from the image signal; and a transmitter for transmitting a data signal to a source of the image signal, the data signal comprising a data field including a display dynamic range indication for the display, the display dynamic range indication comprising at least a white point luminance, this white point luminance being able to characterize whether the display is a high dynamic range display. 20. The display of claim 19 wherein the display dynamic range indication further comprises a black point luminance. 21. The display of claim 19 wherein the dynamic range indication comprises an Electro Optical Transfer Function of the display. 22. The display of claim 19 wherein the display dynamic range indication comprises an ambient light indication. 23. The display of claim 19 further comprising a second receiver for receiving a control data signal from a source of the image signal, the control data signal comprising an indication of a luminance dynamic range to be used by the display; and wherein the driver is arranged to adapt the driving in response to the luminance dynamic range. 24. An image processing method comprising:
receiving an image signal, the image signal comprising at least an encoded image; receiving a data signal from a display, the data signal comprising a data field which comprises a display dynamic range indication of the display, the display dynamic range indication comprising at least a white point luminance, the white point luminance being able to characterize whether the display is a high dynamic range display; generating an output image by applying a dynamic range transform to the encoded image in response to the display dynamic range indication; and outputting an output image signal comprising the output image to the display. 25.-28. (canceled) | 2,600 |
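A toy version of the dynamic range transform in claims 1, 3, and 14 above — remapping luminances encoded for a target display onto the white/black point a sink reports in its display dynamic range indication — might look like the following. The linear remap and the `gamma` shaping knob are assumptions for illustration only; the claims do not prescribe any particular mapping function.

```python
def adapt_luminance(pixels, target_white, display_white,
                    display_black=0.0, gamma=1.0):
    """Illustrative dynamic range transform (not the claimed algorithm).

    pixels        -- scene luminances (cd/m^2) encoded for a target display
    target_white  -- white point luminance the content was encoded for
    display_white -- white point luminance reported by the actual sink
    display_black -- black point luminance reported by the sink (claim 3)
    gamma         -- optional tone-shaping exponent (an assumption)
    """
    out = []
    for y in pixels:
        ratio = min(max(y / target_white, 0.0), 1.0)  # normalize to [0, 1]
        ratio **= gamma                               # optional tone shaping
        out.append(display_black + ratio * (display_white - display_black))
    return out
```

For example, content mastered for a 100 cd/m^2 LDR target stretches linearly onto a 1000 cd/m^2 HDR sink, while values above the target white clip at the sink's reported white point.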
9,771 | 9,771 | 12,183,390 | 2,647 | A system, method, and computer program product are provided for interfacing a user device to a transaction system. An instruction is received from an SMS gateway in an SMS message, and is parsed to obtain a corresponding transaction. A function on the transaction system for performing the transaction is called, and a response is received from the transaction system. The response is then transmitted to the user device in a response SMS message. | 1. A method for interfacing a user device to a transaction system, the method comprising:
receiving an instruction from an SMS gateway in an SMS message, the SMS message originating from the user device; parsing the instruction to obtain a corresponding transaction; calling a function on the transaction system for performing the transaction; receiving a response from the transaction system; and transmitting the response to the user device in a response SMS message. 2. The method of claim 1, further comprising:
determining whether authentication is needed to perform the transaction. 3. The method of claim 2, wherein the step of determining whether authentication is needed to perform the transaction comprises:
determining an authentication level for the transaction, the authentication level selected from a set of authentication levels comprising:
no authentication,
device authentication,
user authentication, and
re-authentication. 4. The method of claim 2, further comprising:
authenticating the user device if authentication is needed. 5. The method of claim 4, wherein the step of authenticating the user device comprises:
authenticating the user device by comparing a unique property of the user device to a registered value for the unique property. 6. The method of claim 5, wherein the user device is a phone, and further wherein the unique property is a phone number for the phone. 7. The method of claim 2, further comprising:
authenticating the user of the user device if authentication is needed. 8. The method of claim 7, wherein the step of authenticating the user of the user device comprises:
sending a WAP Push message to the user device, the WAP Push message comprising a URL for a user authentication page. 9. The method of claim 7, wherein the step of authenticating the user of the user device comprises:
sending a URL for a user authentication page to the user device. 10. The method of claim 1, wherein the step of parsing the instruction comprises:
locating a command in a command grammar corresponding to the instruction; and generating the corresponding transaction according to the command. 11. An interface between a user device and a transaction system, the interface comprising:
a first receiving module to receive an instruction from an SMS gateway in an SMS message, the SMS message originating from the user device; a parsing module to parse the instruction to obtain a corresponding transaction; a service manager module to call a function on the transaction system for performing the transaction; a second receiving module to receive a response from the transaction system; and a transmitting module to transmit the response to the user device in a response SMS message. 12. The interface of claim 11, further comprising:
a determining module to determine whether authentication is needed to perform the transaction. 13. The interface of claim 12, wherein the determining module is configured to determine an authentication level for the transaction, the authentication level selected from a set of authentication levels comprising:
no authentication, device authentication, user authentication, and re-authentication. 14. The interface of claim 12, further comprising:
an authenticating module to authenticate the user device if authentication is needed. 15. The interface of claim 14, wherein the authenticating module is configured to authenticate the user device by comparing a unique property of the user device to a registered value for the unique property. 16. The interface of claim 15, wherein the user device is a phone, and further wherein the unique property is a phone number for the phone. 17. The interface of claim 12, further comprising:
an authenticating module to authenticate the user of the user device if authentication is needed. 18. The interface of claim 17, wherein the authenticating module is configured to send a WAP Push message to the user device, the WAP Push message comprising a URL for a user authentication page. 19. The interface of claim 17, wherein the authenticating module is configured to send a URL for a user authentication page to the user device. 20. The interface of claim 11, wherein the parsing module is configured to locate a command in a command grammar corresponding to the instruction, and to generate the corresponding transaction according to the command. 21. The interface of claim 11, wherein the parsing module is configured to permit definition of new commands. 22. A computer program product comprising a computer-usable medium having computer program logic recorded thereon for enabling a processor to provide an interface between a user device and a transaction system, the computer program logic comprising:
first receiving means for enabling a processor to receive an instruction from an SMS gateway in an SMS message, the SMS message originating from the user device; parsing means for enabling a processor to parse the instruction to obtain a corresponding transaction; calling means for enabling a processor to call a function on the transaction system for performing the transaction; second receiving means for enabling a processor to receive a response from the transaction system; and transmitting means for enabling a processor to transmit the response to the user device in a response SMS message. | A system, method, and computer program product are provided for interfacing a user device to a transaction system. An instruction is received from an SMS gateway in an SMS message, and is parsed to obtain a corresponding transaction. A function on the transaction system for performing the transaction is called, and a response is received from the transaction system. The response is then transmitted to the user device in a response SMS message.1. A method for interfacing a user device to a transaction system, the method comprising:
receiving an instruction from an SMS gateway in an SMS message, the SMS message originating from the user device; parsing the instruction to obtain a corresponding transaction; calling a function on the transaction system for performing the transaction; receiving a response from the transaction system; and transmitting the response to the user device in a response SMS message. 2. The method of claim 1, further comprising:
determining whether authentication is needed to perform the transaction. 3. The method of claim 2, wherein the step of determining whether authentication is needed to perform the transaction comprises:
determining an authentication level for the transaction, the authentication level selected from a set of authentication levels comprising:
no authentication,
device authentication,
user authentication, and
re-authentication. 4. The method of claim 2, further comprising:
authenticating the user device if authentication is needed. 5. The method of claim 4, wherein the step of authenticating the user device comprises:
authenticating the user device by comparing a unique property of the user device to a registered value for the unique property. 6. The method of claim 5, wherein the user device is a phone, and further wherein the unique property is a phone number for the phone. 7. The method of claim 2, further comprising:
authenticating the user of the user device if authentication is needed. 8. The method of claim 7, wherein the step of authenticating the user of the user device comprises:
sending a WAP Push message to the user device, the WAP Push message comprising a URL for a user authentication page. 9. The method of claim 7, wherein the step of authenticating the user of the user device comprises:
sending a URL for a user authentication page to the user device. 10. The method of claim 1, wherein the step of parsing the instruction comprises:
locating a command in a command grammar corresponding to the instruction; and generating the corresponding transaction according to the command. 11. An interface between a user device and a transaction system, the interface comprising:
a first receiving module to receive an instruction from an SMS gateway in an SMS message, the SMS message originating from the user device; a parsing module to parse the instruction to obtain a corresponding transaction; a service manager module to call a function on the transaction system for performing the transaction; a second receiving module to receive a response from the transaction system; and a transmitting module to transmit the response to the user device in a response SMS message. 12. The interface of claim 11, further comprising:
a determining module to determine whether authentication is needed to perform the transaction. 13. The interface of claim 12, wherein the determining module is configured to determine an authentication level for the transaction, the authentication level selected from a set of authentication levels comprising:
no authentication, device authentication, user authentication, and re-authentication. 14. The interface of claim 12, further comprising:
an authenticating module to authenticate the user device if authentication is needed. 15. The interface of claim 14, wherein the authenticating module is configured to authenticate the user device by comparing a unique property of the user device to a registered value for the unique property. 16. The interface of claim 15, wherein the user device is a phone, and further wherein the unique property is a phone number for the phone. 17. The interface of claim 12, further comprising:
an authenticating module to authenticate the user of the user device if authentication is needed. 18. The interface of claim 17, wherein the authenticating module is configured to send a WAP Push message to the user device, the WAP Push message comprising a URL for a user authentication page. 19. The interface of claim 17, wherein the authenticating module is configured to send a URL for a user authentication page to the user device. 20. The interface of claim 11, wherein the parsing module is configured to locate a command in a command grammar corresponding to the instruction, and to generate the corresponding transaction according to the command. 21. The interface of claim 11, wherein the parsing module is configured to permit definition of new commands. 22. A computer program product comprising a computer-usable medium having computer program logic recorded thereon for enabling a processor to provide an interface between a user device and a transaction system, the computer program logic comprising:
first receiving means for enabling a processor to receive an instruction from an SMS gateway in an SMS message, the SMS message originating from the user device; parsing means for enabling a processor to parse the instruction to obtain a corresponding transaction; calling means for enabling a processor to call a function on the transaction system for performing the transaction; second receiving means for enabling a processor to receive a response from the transaction system; and transmitting means for enabling a processor to transmit the response to the user device in a response SMS message. | 2,600 |
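The SMS-to-transaction flow recited in claims 1-10 (receive an instruction, parse it against a command grammar, authenticate, call a transaction function, relay the response) can be sketched in Python as follows. This is a minimal illustration only: the grammar entries, authentication levels, phone numbers, and `DemoTransactionSystem` are hypothetical assumptions, not part of the patent text.

```python
# Authentication levels enumerated in claim 3.
NO_AUTH, DEVICE_AUTH, USER_AUTH, RE_AUTH = range(4)

# Command grammar (claim 10): maps a command word to a transaction
# function name and its required authentication level. Entries are made up.
COMMAND_GRAMMAR = {
    "BAL": ("get_balance", DEVICE_AUTH),
    "PAY": ("make_payment", USER_AUTH),
}

# Claim 6: the phone number serves as the device's unique property.
REGISTERED_NUMBERS = {"+15551234567"}


def authenticate_device(phone_number: str) -> bool:
    """Claim 5: compare a unique device property to its registered value."""
    return phone_number in REGISTERED_NUMBERS


def handle_sms(phone_number: str, sms_text: str, transaction_system) -> str:
    """Claims 1-2: parse the instruction, authenticate if needed,
    call the transaction system, and return the response text."""
    parts = sms_text.split()
    if not parts:
        return "ERR empty message"
    command, *args = parts
    if command not in COMMAND_GRAMMAR:
        return "ERR unknown command"
    func_name, auth_level = COMMAND_GRAMMAR[command]
    if auth_level >= DEVICE_AUTH and not authenticate_device(phone_number):
        return "ERR device not registered"
    # Claim 1: call a function on the transaction system for the transaction.
    return getattr(transaction_system, func_name)(*args)


class DemoTransactionSystem:
    """Stand-in for the external transaction system."""

    def get_balance(self):
        return "BAL 100.00"


print(handle_sms("+15551234567", "BAL", DemoTransactionSystem()))  # BAL 100.00
```

In a real deployment the response string would be handed back to the SMS gateway as the response SMS message; here it is simply returned.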
9,772 | 9,772 | 14,996,506 | 2,689 | A tire parameter monitoring system comprising at least two RF repeaters, wherein each of the at least two RF repeaters is dedicated to an individual sensor unit of at least two sensor units. | 1. A tire parameter monitoring system, comprising:
at least two RF repeaters; wherein each of the at least two RF repeaters is dedicated to an individual sensor unit of at least two sensor units. 2. The tire parameter monitoring system according to claim 1, wherein each RF repeater is configured to receive sensor signal/data transmitted via RF by the corresponding sensor unit and retransmit the sensor signal/data in form of an RF repeater signal based on the corresponding sensor signal/data to a central unit of the tire parameter monitoring system. 3. The tire parameter monitoring system according to claim 2, wherein each RF repeater is configured to retransmit the corresponding sensor signal/data in form of the RF repeater signal in response to receiving an RF control signal from the central unit, the RF control signal comprising an information describing a request for retransmission. 4. The tire parameter monitoring system according to claim 2, wherein the sensor signal/data comprises an information describing a parameter of the wheel the sensor unit is attached to. 5. The tire parameter monitoring system according to claim 4, wherein the parameter of the wheel is at least one out of pressure, temperature, acceleration, battery voltage and sensor unit identification of a tire of the wheel. 6. The tire parameter monitoring system according to claim 5, wherein the RF repeater signal comprises the information describing the parameter of the wheel the corresponding sensor unit is attached to and an identification of the RF repeater. 7. The tire parameter monitoring system according to claim 6, wherein the central unit is configured to receive the RF repeater signal and to allocate the information describing the parameter of the wheel to a position of the wheel at the vehicle using the identification of the RF repeater. 8. 
The tire parameter monitoring system according to claim 7, wherein the sensor signal/data further comprises an identification of the sensor unit, and wherein the central unit is configured to allocate the information describing the parameter of the wheel to a position of the wheel at the vehicle by matching the identification of the sensor unit and the identification of the RF repeater with a known identification of the RF repeater. 9. The tire parameter monitoring system according to claim 8, wherein a position of the RF repeater is known to the central unit. 10. The tire parameter monitoring system according to claim 4, wherein each of the at least two RF repeaters is configured to store the parameter of the wheel received from the corresponding sensor unit and to transmit the RF repeater signal comprising the stored parameter of the wheel in response to receiving an RF control signal from the central unit, the RF control signal indicating a parameter update request. 11. The tire parameter monitoring system according to claim 10, wherein the central unit comprises an RF transceiver configured to transmit the RF control signal and to receive the RF repeater signal. 12. The tire parameter monitoring system according to claim 2, wherein each RF repeater is configured to only retransmit the sensor signal/data of the sensor unit the RF repeater is dedicated to. 13. The tire parameter monitoring system according to claim 12, wherein each RF repeater is configured to detect the sensor signal of the sensor unit the RF repeater is dedicated to based on a received signal strength. 14. The tire parameter monitoring system according to claim 1, wherein a distance between one of the at least two sensor units and the corresponding RF repeater is smaller than a distance between the one of the at least two sensor units and the other RF repeater. 15. The tire parameter monitoring system according to claim 1, wherein each of the RF repeaters comprises a wireless transceiver. 16.
The tire parameter monitoring system according to claim 1, wherein the at least two sensor units are attached to different wheels of a vehicle. 17. The tire parameter monitoring system according to claim 1, wherein a number of RF repeaters is equal to a number of sensor units. 18. The tire parameter monitoring system according to claim 1, wherein each of the at least two sensor units comprises an RF transmitter. 19. A method for updating a position of a wheel at a vehicle having a tire parameter monitoring system, the tire parameter monitoring system comprising a central unit and at least two sensor units attached to different wheels of the vehicle, wherein each of the at least two sensor units has a dedicated RF repeater, the method comprising:
transmitting an RF repeater signal with each of the RF repeaters to the central unit, wherein the RF repeater signal comprises an identification of the RF repeater and an identification of the sensor unit the RF repeater is dedicated to; and receiving with the central unit the RF repeater signal of each of the RF repeaters and matching the identification of the sensor unit and the identification of the RF repeater with a known identification of the RF repeater. 20. A method for monitoring parameters of tires of a vehicle, the method comprising:
transmitting a first sensor signal/data with a first sensor unit attached to a first wheel of the vehicle and transmitting a second sensor signal/data with a second sensor unit attached to a second wheel of the vehicle; and retransmitting the first sensor signal/data with a first RF repeater which is dedicated to the first sensor unit and retransmitting the second sensor signal/data with a second RF repeater which is dedicated to the second sensor unit. 21. A computer program comprising instructions stored on a non-transitory storage medium which, when executed by a processor, perform a method for updating a position of a wheel at a vehicle having a tire parameter monitoring system, the tire parameter monitoring system comprising a central unit and at least two sensor units attached to different wheels of the vehicle, wherein each of the at least two sensor units has a dedicated RF repeater, the method comprising:
transmitting an RF repeater signal with each of the RF repeaters to the central unit, wherein the RF repeater signal comprises an identification of the RF repeater and an identification of the sensor unit the RF repeater is dedicated to; and receiving with the central unit the RF repeater signal of each of the RF repeaters and matching the identification of the sensor unit and the identification of the RF repeater with a known identification of the RF repeater. 22. A computer program comprising instructions stored on a non-transitory storage medium which, when executed by a processor, perform a method for monitoring parameters of tires of a vehicle, the method comprising:
transmitting a first sensor signal/data with a first sensor unit attached to a first wheel of the vehicle and transmitting a second sensor signal/data with a second sensor unit attached to a second wheel of the vehicle; and retransmitting the first sensor signal/data with a first RF repeater which is dedicated to the first sensor unit and retransmitting the second sensor signal/data with a second RF repeater which is dedicated to the second sensor unit. | A tire parameter monitoring system comprising at least two RF repeaters, wherein each of the at least two RF repeaters is dedicated to an individual sensor unit of at least two sensor units.1. A tire parameter monitoring system, comprising:
at least two RF repeaters; wherein each of the at least two RF repeaters is dedicated to an individual sensor unit of at least two sensor units. 2. The tire parameter monitoring system according to claim 1, wherein each RF repeater is configured to receive sensor signal/data transmitted via RF by the corresponding sensor unit and retransmit the sensor signal/data in form of an RF repeater signal based on the corresponding sensor signal/data to a central unit of the tire parameter monitoring system. 3. The tire parameter monitoring system according to claim 2, wherein each RF repeater is configured to retransmit the corresponding sensor signal/data in form of the RF repeater signal in response to receiving an RF control signal from the central unit, the RF control signal comprising an information describing a request for retransmission. 4. The tire parameter monitoring system according to claim 2, wherein the sensor signal/data comprises an information describing a parameter of the wheel the sensor unit is attached to. 5. The tire parameter monitoring system according to claim 4, wherein the parameter of the wheel is at least one out of pressure, temperature, acceleration, battery voltage and sensor unit identification of a tire of the wheel. 6. The tire parameter monitoring system according to claim 5, wherein the RF repeater signal comprises the information describing the parameter of the wheel the corresponding sensor unit is attached to and an identification of the RF repeater. 7. The tire parameter monitoring system according to claim 6, wherein the central unit is configured to receive the RF repeater signal and to allocate the information describing the parameter of the wheel to a position of the wheel at the vehicle using the identification of the RF repeater. 8. 
The tire parameter monitoring system according to claim 7, wherein the sensor signal/data further comprises an identification of the sensor unit, and wherein the central unit is configured to allocate the information describing the parameter of the wheel to a position of the wheel at the vehicle by matching the identification of the sensor unit and the identification of the RF repeater with a known identification of the RF repeater. 9. The tire parameter monitoring system according to claim 8, wherein a position of the RF repeater is known to the central unit. 10. The tire parameter monitoring system according to claim 4, wherein each of the at least two RF repeaters is configured to store the parameter of the wheel received from the corresponding sensor unit and to transmit the RF repeater signal comprising the stored parameter of the wheel in response to receiving an RF control signal from the central unit, the RF control signal indicating a parameter update request. 11. The tire parameter monitoring system according to claim 10, wherein the central unit comprises an RF transceiver configured to transmit the RF control signal and to receive the RF repeater signal. 12. The tire parameter monitoring system according to claim 2, wherein each RF repeater is configured to only retransmit the sensor signal/data of the sensor unit the RF repeater is dedicated to. 13. The tire parameter monitoring system according to claim 12, wherein each RF repeater is configured to detect the sensor signal of the sensor unit the RF repeater is dedicated to based on a received signal strength. 14. The tire parameter monitoring system according to claim 1, wherein a distance between one of the at least two sensor units and the corresponding RF repeater is smaller than a distance between the one of the at least two sensor units and the other RF repeater. 15. The tire parameter monitoring system according to claim 1, wherein each of the RF repeaters comprises a wireless transceiver. 16.
The tire parameter monitoring system according to claim 1, wherein the at least two sensor units are attached to different wheels of a vehicle. 17. The tire parameter monitoring system according to claim 1, wherein a number of RF repeaters is equal to a number of sensor units. 18. The tire parameter monitoring system according to claim 1, wherein each of the at least two sensor units comprises an RF transmitter. 19. A method for updating a position of a wheel at a vehicle having a tire parameter monitoring system, the tire parameter monitoring system comprising a central unit and at least two sensor units attached to different wheels of the vehicle, wherein each of the at least two sensor units has a dedicated RF repeater, the method comprising:
transmitting an RF repeater signal with each of the RF repeaters to the central unit, wherein the RF repeater signal comprises an identification of the RF repeater and an identification of the sensor unit the RF repeater is dedicated to; and receiving with the central unit the RF repeater signal of each of the RF repeaters and matching the identification of the sensor unit and the identification of the RF repeater with a known identification of the RF repeater. 20. A method for monitoring parameters of tires of a vehicle, the method comprising:
transmitting a first sensor signal/data with a first sensor unit attached to a first wheel of the vehicle and transmitting a second sensor signal/data with a second sensor unit attached to a second wheel of the vehicle; and retransmitting the first sensor signal/data with a first RF repeater which is dedicated to the first sensor unit and retransmitting the second sensor signal/data with a second RF repeater which is dedicated to the second sensor unit. 21. A computer program comprising instructions stored on a non-transitory storage medium which, when executed by a processor, perform a method for updating a position of a wheel at a vehicle having a tire parameter monitoring system, the tire parameter monitoring system comprising a central unit and at least two sensor units attached to different wheels of the vehicle, wherein each of the at least two sensor units has a dedicated RF repeater, the method comprising:
transmitting an RF repeater signal with each of the RF repeaters to the central unit, wherein the RF repeater signal comprises an identification of the RF repeater and an identification of the sensor unit the RF repeater is dedicated to; and receiving with the central unit the RF repeater signal of each of the RF repeaters and matching the identification of the sensor unit and the identification of the RF repeater with a known identification of the RF repeater. 22. A computer program comprising instructions stored on a non-transitory storage medium which, when executed by a processor, perform a method for monitoring parameters of tires of a vehicle, the method comprising:
transmitting a first sensor signal/data with a first sensor unit attached to a first wheel of the vehicle and transmitting a second sensor signal/data with a second sensor unit attached to a second wheel of the vehicle; and retransmitting the first sensor signal/data with a first RF repeater which is dedicated to the first sensor unit and retransmitting the second sensor signal/data with a second RF repeater which is dedicated to the second sensor unit. | 2,600 |
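The wheel-localization step recited in claims 7-9 and 19 — allocating a wheel parameter to a position by matching the repeater identification (and its dedicated sensor's identification) against identifications known to the central unit — can be sketched as follows. The repeater IDs, positions, and the dictionary layout of a repeater signal are illustrative assumptions.

```python
# Known mounting position of each dedicated RF repeater (claim 9 states
# the repeater position is known to the central unit).
REPEATER_POSITIONS = {"R1": "front-left", "R2": "front-right"}


def locate_wheels(repeater_signals):
    """Central-unit matching: each repeater signal carries the repeater's
    own ID and the ID of the sensor unit it is dedicated to, so matching
    the repeater ID to its known position localizes that sensor's wheel."""
    sensor_positions = {}
    for signal in repeater_signals:
        position = REPEATER_POSITIONS.get(signal["repeater_id"])
        if position is not None:
            sensor_positions[signal["sensor_id"]] = position
    return sensor_positions


signals = [
    {"repeater_id": "R1", "sensor_id": "S42", "pressure_kpa": 220},
    {"repeater_id": "R2", "sensor_id": "S43", "pressure_kpa": 218},
]
print(locate_wheels(signals))  # {'S42': 'front-left', 'S43': 'front-right'}
```

Because each repeater is dedicated to exactly one sensor unit, the mapping needs no signal-strength arbitration at the central unit: the repeater ID alone fixes the wheel position.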
9,773 | 9,773 | 14,293,502 | 2,651 | Smart sensors comprising one or more microelectromechanical systems (MEMS) sensors and a digital signal processor (DSP) in a sensor package are described. An exemplary smart sensor can comprise a MEMS acoustic sensor or microphone and a DSP housed in a package or enclosure comprising a substrate and a lid and a package substrate that defines a back cavity for the MEMS acoustic sensor or microphone. Provided implementations can also comprise a MEMS motion sensor housed in the package or enclosure. Embodiments of the subject disclosure can provide improved power management and battery life from a single charge by intelligently responding to trigger events or wake events while also providing an always on sensor that persistently detects the trigger events or wake events. In addition, various physical configurations of smart sensors and MEMS sensor or microphone packages are described. | 1. A sensor, comprising:
a microelectromechanical systems (MEMS) acoustic sensor associated with a back cavity; a digital signal processor (DSP) located in the back cavity and configured to generate a control signal for a system processor in response to receiving a signal from the MEMS acoustic sensor; and a package comprising a lid and a package substrate, wherein the package has a port adapted to receive acoustic waves, and wherein the package houses the MEMS acoustic sensor and defines the back cavity associated with the MEMS acoustic sensor. 2. The sensor of claim 1, wherein the DSP is configured to generate a wake-up signal in response to processing the signal from the MEMS acoustic sensor. 3. The sensor of claim 1, wherein the DSP comprises an application specific integrated circuit (ASIC). 4. The sensor of claim 1, wherein the DSP comprises a wake-up module configured to wake up the system processor. 5. The sensor of claim 4, further comprising:
a device comprising the system processor and the sensor, wherein the system processor is located outside the package. 6. The sensor of claim 5, wherein the system processor includes an integrated circuit (IC) for controlling functionality of a mobile phone. 7. The sensor of claim 1, wherein the DSP further comprises a sensor control module configured to control the MEMS acoustic sensor. 8. The sensor of claim 1, further comprising:
a MEMS motion sensor. 9. The sensor of claim 8, wherein the DSP is configured to generate the control signal in response to receiving at least one of a signal from the MEMS motion sensor or the signal from the MEMS acoustic sensor. 10. The sensor of claim 8, wherein the DSP is configured to control the MEMS motion sensor. 11. The sensor of claim 8, wherein the DSP is configured to at least one of calibrate, adjust performance of, or change operating mode of at least one of the MEMS acoustic sensor or the MEMS motion sensor. 12. The sensor of claim 1, wherein the sensor is configured to operate at a voltage below 1.5 volts. 13. The sensor of claim 1, wherein the sensor is configured to operate in an always-on mode. 14. A microphone package, comprising:
a microelectromechanical systems (MEMS) microphone associated with a back cavity; a digital signal processor (DSP) located in the back cavity and configured to control a device external to the microphone package; and the microphone package comprising a lid and a package substrate, wherein the microphone package has a port adapted to receive acoustic pressure, and wherein the microphone package defines the back cavity. 15. The microphone package of claim 14, further comprising:
a MEMS motion sensor. 16. The microphone package of claim 15, wherein the DSP is configured to at least one of calibrate, adjust performance of, or change operating mode of at least one of the MEMS microphone or the MEMS motion sensor. 17. The microphone package of claim 16, wherein the DSP is configured to control the device in response to receiving at least one of a signal from the MEMS motion sensor or a signal from the MEMS microphone. 18. A method comprising:
receiving acoustic pressure at a microelectromechanical systems (MEMS) acoustic sensor enclosed in a sensor package comprising a lid and a package substrate via a port in the sensor package that is adapted to receive the acoustic pressure; transmitting a signal from the MEMS acoustic sensor to a digital signal processor (DSP) enclosed within a back cavity of the MEMS acoustic sensor; and generating a control signal by using the DSP, wherein the control signal is adapted to facilitate controlling a device external to the sensor package. 19. The method of claim 18, further comprising:
transmitting the control signal from the DSP to the device. 20. The method of claim 18, wherein the generating the control signal by using the DSP comprises generating a wake-up signal adapted to facilitate powering up the device from a low-power state. 21. The method of claim 18, wherein the generating the control signal is based on the signal from the MEMS acoustic sensor. 22. The method of claim 18, further comprising:
transmitting a signal from a MEMS motion sensor enclosed within the sensor package to the DSP. 23. The method of claim 22, wherein the generating the control signal is based on at least one of the signal from the MEMS motion sensor or the signal from the MEMS acoustic sensor. 24. The method of claim 21, further comprising:
at least one of calibrating, adjusting performance of, or changing operating mode of at least one of the MEMS acoustic sensor or the MEMS motion sensor by using the DSP. | Smart sensors comprising one or more microelectromechanical systems (MEMS) sensors and a digital signal processor (DSP) in a sensor package are described. An exemplary smart sensor can comprise a MEMS acoustic sensor or microphone and a DSP housed in a package or enclosure comprising a substrate and a lid and a package substrate that defines a back cavity for the MEMS acoustic sensor or microphone. Provided implementations can also comprise a MEMS motion sensor housed in the package or enclosure. Embodiments of the subject disclosure can provide improved power management and battery life from a single charge by intelligently responding to trigger events or wake events while also providing an always on sensor that persistently detects the trigger events or wake events. In addition, various physical configurations of smart sensors and MEMS sensor or microphone packages are described.1. A sensor, comprising:
a microelectromechanical systems (MEMS) acoustic sensor associated with a back cavity; a digital signal processor (DSP) located in the back cavity and configured to generate a control signal for a system processor in response to receiving a signal from the MEMS acoustic sensor; and a package comprising a lid and a package substrate, wherein the package has a port adapted to receive acoustic waves, and wherein the package houses the MEMS acoustic sensor and defines the back cavity associated with the MEMS acoustic sensor. 2. The sensor of claim 1, wherein the DSP is configured to generate a wake-up signal in response to processing the signal from the MEMS acoustic sensor. 3. The sensor of claim 1, wherein the DSP comprises an application specific integrated circuit (ASIC). 4. The sensor of claim 1, wherein the DSP comprises a wake-up module configured to wake up the system processor. 5. The sensor of claim 4, further comprising:
a device comprising the system processor and the sensor, wherein the system processor is located outside the package. 6. The sensor of claim 5, wherein the system processor includes an integrated circuit (IC) for controlling functionality of a mobile phone. 7. The sensor of claim 1, wherein the DSP further comprises a sensor control module configured to control the MEMS acoustic sensor. 8. The sensor of claim 1, further comprising:
a MEMS motion sensor. 9. The sensor of claim 8, wherein the DSP is configured to generate the control signal in response to receiving at least one of a signal from the MEMS motion sensor or the signal from the MEMS acoustic sensor. 10. The sensor of claim 8, wherein the DSP is configured to control the MEMS motion sensor. 11. The sensor of claim 8, wherein the DSP is configured to at least one of calibrate, adjust performance of, or change operating mode of at least one of the MEMS acoustic sensor or the MEMS motion sensor. 12. The sensor of claim 1, wherein the sensor is configured to operate at a voltage below 1.5 volts. 13. The sensor of claim 1, wherein the sensor is configured to operate in an always-on mode. 14. A microphone package, comprising:
a microelectromechanical systems (MEMS) microphone associated with a back cavity; a digital signal processor (DSP) located in the back cavity and configured to control a device external to the microphone package; and the microphone package comprising a lid and a package substrate, wherein the microphone package has a port adapted to receive acoustic pressure, and wherein the microphone package defines the back cavity. 15. The microphone package of claim 14, further comprising:
a MEMS motion sensor. 16. The microphone package of claim 15, wherein the DSP is configured to at least one of calibrate, adjust performance of, or change operating mode of at least one of the MEMS microphone or the MEMS motion sensor. 17. The microphone package of claim 16, wherein the DSP is configured to control the device in response to receiving at least one of a signal from the MEMS motion sensor or a signal from the MEMS microphone. 18. A method comprising:
receiving acoustic pressure at a microelectromechanical systems (MEMS) acoustic sensor enclosed in a sensor package comprising a lid and a package substrate via a port in the sensor package that is adapted to receive the acoustic pressure; transmitting a signal from the MEMS acoustic sensor to a digital signal processor (DSP) enclosed within a back cavity of the MEMS acoustic sensor; and generating a control signal by using the DSP, wherein the control signal is adapted to facilitate controlling a device external to the sensor package. 19. The method of claim 18, further comprising:
transmitting the control signal from the DSP to the device. 20. The method of claim 18, wherein the generating the control signal by using the DSP comprises generating a wake-up signal adapted to facilitate powering up the device from a low-power state. 21. The method of claim 18, wherein the generating the control signal is based on the signal from the MEMS acoustic sensor. 22. The method of claim 18, further comprising:
transmitting a signal from a MEMS motion sensor enclosed within the sensor package to the DSP. 23. The method of claim 22, wherein the generating the control signal is based on at least one of the signal from the MEMS motion sensor or the signal from the MEMS acoustic sensor. 24. The method of claim 21, further comprising:
at least one of calibrating, adjusting performance of, or changing operating mode of at least one of the MEMS acoustic sensor or the MEMS motion sensor by using the DSP. | 2,600 |
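Claims 2 and 20 describe the in-package DSP processing the acoustic signal and generating a wake-up control signal that powers up the external system processor from a low-power state, which is what makes the always-on operating mode of claim 13 power-efficient. A minimal sketch of that trigger logic follows; the amplitude threshold and the "WAKE"/"SLEEP" signal names are hypothetical illustrations.

```python
# Hypothetical normalized amplitude above which the DSP treats a
# sample as a trigger event worth waking the system processor for.
WAKE_THRESHOLD = 0.5


def process_samples(samples):
    """Always-on DSP loop body: return 'WAKE' when any acoustic sample
    exceeds the trigger level, otherwise 'SLEEP' so the external system
    processor can stay in its low-power state."""
    for amplitude in samples:
        if abs(amplitude) > WAKE_THRESHOLD:
            return "WAKE"
    return "SLEEP"


print(process_samples([0.01, 0.02, 0.9]))  # WAKE
print(process_samples([0.01, 0.02]))       # SLEEP
```

A production DSP would run a more elaborate detector (e.g. keyword or motion-pattern recognition, per the motion-sensor claims), but the control-flow shape — continuous low-power monitoring that emits a one-shot wake-up signal — is the same.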
9,774 | 9,774 | 14,629,936 | 2,612 | According to the invention, there is provided a computer-implemented method for designing a three dimensional modeled object in a three dimensional scene, wherein the method comprises the steps of: providing (300) a first curve; duplicating (301) the first curve to obtain a second curve; determining (302) a set of at least one starting point belonging to the first curve; determining (303) a set of at least one target point belonging to the second curve, each target point being associated with at least one starting point; linking (304) the starting points with their associated target points by using at least a connecting curve. | 1. A computer-implemented method for designing a three dimensional modeled object in a three dimensional scene, wherein the method comprises the steps of:
providing a first curve; duplicating the first curve to obtain a second curve; determining a set of at least one starting point belonging to the first curve; determining a set of at least one target point belonging to the second curve, each target point being associated with at least one starting point; linking the starting points with their associated target points by using at least a connecting curve; the method further comprising a step wherein at least a transformation is applied to the second curve, this transformation being chosen among axis rotation and stretching. 2. The method according to claim 1 wherein a combination of at least a first and a second transformation is applied to the second curve, the first transformation being chosen among axis rotation and stretching and the second transformation being chosen among normal translation, planar translation, axis rotation, stretching or uniform scaling. 3. The method according to claim 1 wherein said transformation is applied to the second curve before said step of linking the starting points with their associated target points by using at least a connecting curve. 4. The method according to claim 1 wherein said second curve is obtained outside a plane containing said first curve. 5. The method according to claim 1 wherein the starting points are distributed uniformly along the first curve. 6. The method according to claim 1 wherein the starting points are determined by identifying the local extrema of the first curve. 7. The method according to claim 1, wherein a point of the first curve which is an inflexion point is considered as a starting point. 8. The method according to claim 1 wherein the first curve is of geometrical type. 9. The method according to claim 8, wherein a point belonging to the first curve is identified as a starting point each time the first derivative with respect to one of the dimensions of a local reference associated with said first curve becomes zero and changes sign. 10.
The method according to claim 1, wherein the first curve is defined by at least one stroke. 11. The method according to claim 10, comprising the steps of:
individually discretizing each stroke of the curve to obtain a first set of points; applying a distance filter on the first set of points to obtain a second set of points; applying on the second set of points the Douglas-Peucker-Ramer algorithm with a predefined threshold TH_DPR in order to identify the local extrema of the first curve. 12. The method according to claim 11, wherein TH_DPR is chosen equal to one quarter of the length of the longest side of the bounding box surrounding the first curve. 13. The method according to claim 1, wherein the ends of the first curve are identified as starting points. 14. The method according to claim 1, wherein connecting curves are simple segments, Bezier curves or NURBS curves. 15. The method according to claim 1, wherein a connecting curve is a bi-dimensional curve associated with a plane which is by default orthogonal to the tangent of the first curve at the starting point and to the tangent of the second curve at the target point. 16. A computer program product, stored on a non-transitory computer readable medium comprising code for causing a computer to implement a method for designing a three dimensional modeled object in a three dimensional scene, wherein the method comprises the steps of:
providing a first curve; duplicating the first curve to obtain a second curve; determining a set of at least one starting point belonging to the first curve; determining a set of at least one target point belonging to the second curve, each target point being associated with at least one starting point; linking the relevant points with their associated target points by using at least a connecting curve; the method further comprising a step wherein at least a transformation is applied to the second curve, this transformation being chosen among axis rotation and stretching. 17. An electronic device comprising:
at least one central processing unit; a screen; a non-transitory memory; and at least one module stored in the memory and configured for execution by the at least one central processing unit, the at least one module including instructions: to provide a first curve; to duplicate the first curve to obtain a second curve; to determine a set of at least one starting point belonging to the first curve; to determine a set of at least one target point belonging to the second curve, each target point being associated with at least one starting point; to link the relevant points with their associated target points by using at least a connecting curve. | According to the invention, there is provided a computer-implemented method for designing a three dimensional modeled object in a three dimensional scene, wherein the method comprises the steps of: providing ( 300 ) a first curve; duplicating ( 301 ) the first curve to obtain a second curve; determining ( 302 ) a set of at least one starting point belonging to the first curve; determining ( 303 ) a set of at least one target point belonging to the second curve, each target point being associated with at least one starting point; linking ( 304 ) the relevant points with their associated target points by using at least a connecting curve. 1. A computer-implemented method for designing a three dimensional modeled object in a three dimensional scene, wherein the method comprises the steps of:
providing a first curve; duplicating the first curve to obtain a second curve; determining a set of at least one starting point belonging to the first curve; determining a set of at least one target point belonging to the second curve, each target point being associated with at least one starting point; linking the relevant points with their associated target points by using at least a connecting curve; the method further comprising a step wherein at least a transformation is applied to the second curve, this transformation being chosen among axis rotation and stretching. 2. The method according to claim 1 wherein a combination of at least a first and a second transformation is applied to the second curve, the first transformation being chosen among axis rotation and stretching and the second transformation being chosen among normal translation, planar translation, axis rotation, stretching or uniform scaling. 3. The method according to claim 1 wherein said transformation is applied to the second curve before said step of linking relevant points with their associated target points by using at least a connecting curve. 4. The method according to claim 1 wherein said second curve is obtained outside a plane containing said first curve. 5. The method according to claim 1 wherein the starting points are distributed uniformly along the initial curve. 6. The method according to claim 1 wherein the starting points are determined by identifying the local extrema of the initial curve. 7. The method according to claim 1, wherein a point of the first curve which is an inflexion point is considered as a starting point. 8. The method according to claim 1 wherein the first curve is of geometrical type. 9. The method according to claim 8, wherein a point belonging to the first curve is identified as a starting point each time the first derivative with respect to one of the dimensions of a local reference associated with said first curve becomes zero and changes sign. 10. 
The method according to claim 1, wherein the first curve is defined by at least one stroke. 11. The method according to claim 10, comprising the steps of:
individually discretizing each stroke of the curve to obtain a first set of points; applying a distance filter on the first set of points to obtain a second set of points; applying on the second set of points the Douglas-Peucker-Ramer algorithm with a predefined threshold TH_DPR in order to identify the local extrema of the first curve. 12. The method according to claim 11, wherein TH_DPR is chosen equal to one quarter of the length of the longest side of the bounding box surrounding the first curve. 13. The method according to claim 1, wherein the ends of the first curve are identified as starting points. 14. The method according to claim 1, wherein connecting curves are simple segments, Bezier curves or NURBS curves. 15. The method according to claim 1, wherein a connecting curve is a bi-dimensional curve associated with a plane which is by default orthogonal to the tangent of the first curve at the starting point and to the tangent of the second curve at the target point. 16. A computer program product, stored on a non-transitory computer readable medium comprising code for causing a computer to implement a method for designing a three dimensional modeled object in a three dimensional scene, wherein the method comprises the steps of:
providing a first curve; duplicating the first curve to obtain a second curve; determining a set of at least one starting point belonging to the first curve; determining a set of at least one target point belonging to the second curve, each target point being associated with at least one starting point; linking the relevant points with their associated target points by using at least a connecting curve; the method further comprising a step wherein at least a transformation is applied to the second curve, this transformation being chosen among axis rotation and stretching. 17. An electronic device comprising:
at least one central processing unit; a screen; a non-transitory memory; and at least one module stored in the memory and configured for execution by the at least one central processing unit, the at least one module including instructions: to provide a first curve; to duplicate the first curve to obtain a second curve; to determine a set of at least one starting point belonging to the first curve; to determine a set of at least one target point belonging to the second curve, each target point being associated with at least one starting point; to link the relevant points with their associated target points by using at least a connecting curve. | 2,600
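Claims 11 and 12 of application 14,629,936 above describe a concrete algorithmic step: identify the local extrema of a stroke-defined curve with the Douglas-Peucker-Ramer algorithm, using a threshold TH_DPR equal to one quarter of the longest side of the curve's bounding box. A minimal sketch of that step follows; the function names and the 2D tuple representation of points are illustrative assumptions, not taken from the patent.

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification of a 2D polyline.

    Keeps the endpoints, recursively keeping the interior point farthest
    from the chord whenever that distance exceeds epsilon.
    """
    if len(points) < 3:
        return points[:]
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy) or 1.0  # guard against a degenerate chord
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        # Perpendicular distance from the chord through the endpoints.
        d = abs(dy * (px - x0) - dx * (py - y0)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = rdp(points[:idx + 1], epsilon)
    right = rdp(points[idx:], epsilon)
    return left[:-1] + right  # drop the shared split point once

def th_dpr(points):
    """Claim 12's threshold: a quarter of the longest bounding-box side."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return max(max(xs) - min(xs), max(ys) - min(ys)) / 4.0
```

The surviving vertices after `rdp(points, th_dpr(points))` are the candidate local extrema (alongside the curve ends, which claim 13 also treats as starting points); the distance pre-filter of claim 11 would simply thin near-duplicate samples before this call.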
9,775 | 9,775 | 15,255,008 | 2,664 | A system comprises a processor configured to execute instructions to receive an indication of an occurrence of a human gesture and to perform an analysis of the indication of the occurrence of the human gesture to determine contextual criteria having a relationship to the occurrence of the human gesture. The processor may determine a meaning of the human gesture based at least in part on the contextual criteria and a plurality of possible intended meanings for the human gesture. The processor also may execute an instruction responsive to determining the meaning of the human gesture, wherein at least a portion of the instruction is dependent upon the meaning of the human gesture. | 1. A computer program product for determination of a contextual meaning of human gestures, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to:
receive a visual image asset comprising a plurality of pixels that collectively depict a content of the visual image asset; analyze the plurality of pixels according to a first digital image analysis protocol to determine at least one indicia of a context associated with the content of the visual image asset; analyze the plurality of pixels according to a second digital image analysis protocol to determine a gesture indicated by the content of the visual image asset; perform a semantic mapping to determine the contextual meaning of the gesture based on the at least one indicia of the context and the determined gesture; and execute an operation responsive to determining the contextual meaning of the gesture, wherein at least one function of the operation is dependent upon the contextual meaning of the gesture. 2. The computer program product of claim 1, wherein at least one of the first digital image analysis and the second digital image analysis is selected from among a group consisting of facial recognition, clothing recognition, location identification, temporal identification, and role identification. 3. The computer program product of claim 1, wherein the at least one indicia of the context associated with the content of the visual image asset indicates a contextual characteristic selected from among a group consisting of culture, ethnicity, religion, country, region, location, setting, role, and time. 4. The computer program product of claim 1, wherein the gesture is performed by a non-human entity. 5. The computer program product of claim 1, wherein executing the program instructions further causes the processor to determine a geographic area associated with the at least one indicia of the context. 6. The computer program product of claim 5, wherein performing the semantic mapping to determine the contextual meaning of the gesture comprises:
determining one or more cultural and societal customs corresponding to the geographic area associated with the at least one indicia of the context; determining one or more possible meanings of the gesture; mapping the one or more cultural and societal customs to the one or more possible meanings of the gesture to determine a probability of accuracy for each of the one or more possible meanings of the gesture according to the geographic area associated with the at least one indicia of the context indicated; and determining a possible meaning of the gesture having a highest determined probability of accuracy and selected from among the one or more possible meanings of the gestures as the contextual meaning of the gesture. 7. The computer program product of claim 1, wherein the computer program product is executed in a cloud environment as a software as a service. 8. A computer-implemented method, comprising:
receiving a digital media asset comprising a plurality of pixels; performing a first image analysis of the digital media asset at a pixel-based level to determine a gesture embodied within the digital media asset according to a first relationship among the plurality of pixels of the digital media asset; performing a second image analysis of the digital media asset at the pixel-based level to determine a context of the gesture according to a second relationship among the plurality of pixels of the digital media asset; and determining a cultural meaning of the gesture based on the determined gesture and the context of the gesture. 9. The computer-implemented method of claim 8, wherein determining the cultural meaning of the gesture comprises:
determining an area associated with the context of the gesture; determining a plurality of potential meanings of the determined gesture; and determining the cultural meaning of the gesture based on the plurality of potential meanings of the determined gesture and the area associated with the context of the gesture. 10. The computer-implemented method of claim 9, wherein determining the cultural meaning of the gesture based on the plurality of potential meanings of the determined gesture and the area associated with the context of the gesture comprises:
determining a score for each of a plurality of context metadata values based on the content of the digital media asset; mapping each of the plurality of context metadata values to each of the plurality of potential meanings of the determined gesture to associate an accumulated score with each of the plurality of potential meanings of the determined gesture; and selecting the cultural meaning of the gesture as one of the plurality of potential meanings of the determined gesture from among the plurality of potential meanings of the determined gesture based at least in part on a determination that the accumulated score of the one of the plurality of potential meanings of the determined gesture exceeds a threshold. 11. The computer-implemented method of claim 8, wherein the second image analysis is selected from a group consisting of facial recognition, clothing recognition, location identification, temporal identification, and role identification. 12. The computer-implemented method of claim 11, wherein the second image analysis determines one or more facial recognition context metadata values selected from a group consisting of race, gender, and ethnicity. 13. The computer-implemented method of claim 11, wherein the second image analysis determines one or more clothing recognition context metadata values selected from a group consisting of culture, religion, and country. 14. The computer-implemented method of claim 11, wherein the second image analysis determines one or more location identification context metadata values selected from a group consisting of region, country, location, and setting. 15. The computer-implemented method of claim 8, wherein the computer-implemented method is implemented in a cloud environment as a software as a service. 16. A system, comprising a processor configured to:
receive an indication of an occurrence of a human gesture; execute one or more instructions to perform an analysis of the indication of the occurrence of the human gesture to determine contextual criteria having a relationship to the occurrence of the human gesture; determine a meaning of the human gesture based at least in part on the contextual criteria and a plurality of possible intended meanings for the human gesture; and execute an instruction responsive to determining the meaning of the human gesture, wherein at least a portion of the instruction is dependent upon the meaning of the human gesture. 17. The system of claim 16, wherein the indication of the occurrence of the human gesture is a visual image asset, and wherein the contextual criteria is determined based on surroundings of the human gesture in the visual image asset. 18. The system of claim 16, wherein the plurality of possible intended meanings for the human gesture are filtered according to the contextual criteria to determine the meaning of the human gesture. 19. The system of claim 16, wherein the contextual criteria is determined according to a multistep context metadata determination system comprising facial recognition, clothing recognition, location identification, temporal identification, and role identification. 20. The system of claim 16, wherein determining the meaning of the human gesture comprises performing a semantic mapping of the human gesture to the possible intended meanings for the human gesture according to the contextual criteria. | A system comprises a processor configured to execute instructions to receive an indication of an occurrence of a human gesture and to perform an analysis of the indication of the occurrence of the human gesture to determine contextual criteria having a relationship to the occurrence of the human gesture. 
The processor may determine a meaning of the human gesture based at least in part on the contextual criteria and a plurality of possible intended meanings for the human gesture. The processor also may execute an instruction responsive to determining the meaning of the human gesture, wherein at least a portion of the instruction is dependent upon the meaning of the human gesture. 1. A computer program product for determination of a contextual meaning of human gestures, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to:
receive a visual image asset comprising a plurality of pixels that collectively depict a content of the visual image asset; analyze the plurality of pixels according to a first digital image analysis protocol to determine at least one indicia of a context associated with the content of the visual image asset; analyze the plurality of pixels according to a second digital image analysis protocol to determine a gesture indicated by the content of the visual image asset; perform a semantic mapping to determine the contextual meaning of the gesture based on the at least one indicia of the context and the determined gesture; and execute an operation responsive to determining the contextual meaning of the gesture, wherein at least one function of the operation is dependent upon the contextual meaning of the gesture. 2. The computer program product of claim 1, wherein at least one of the first digital image analysis and the second digital image analysis is selected from among a group consisting of facial recognition, clothing recognition, location identification, temporal identification, and role identification. 3. The computer program product of claim 1, wherein the at least one indicia of the context associated with the content of the visual image asset indicates a contextual characteristic selected from among a group consisting of culture, ethnicity, religion, country, region, location, setting, role, and time. 4. The computer program product of claim 1, wherein the gesture is performed by a non-human entity. 5. The computer program product of claim 1, wherein executing the program instructions further causes the processor to determine a geographic area associated with the at least one indicia of the context. 6. The computer program product of claim 5, wherein performing the semantic mapping to determine the contextual meaning of the gesture comprises:
determining one or more cultural and societal customs corresponding to the geographic area associated with the at least one indicia of the context; determining one or more possible meanings of the gesture; mapping the one or more cultural and societal customs to the one or more possible meanings of the gesture to determine a probability of accuracy for each of the one or more possible meanings of the gesture according to the geographic area associated with the at least one indicia of the context indicated; and determining a possible meaning of the gesture having a highest determined probability of accuracy and selected from among the one or more possible meanings of the gestures as the contextual meaning of the gesture. 7. The computer program product of claim 1, wherein the computer program product is executed in a cloud environment as a software as a service. 8. A computer-implemented method, comprising:
receiving a digital media asset comprising a plurality of pixels; performing a first image analysis of the digital media asset at a pixel-based level to determine a gesture embodied within the digital media asset according to a first relationship among the plurality of pixels of the digital media asset; performing a second image analysis of the digital media asset at the pixel-based level to determine a context of the gesture according to a second relationship among the plurality of pixels of the digital media asset; and determining a cultural meaning of the gesture based on the determined gesture and the context of the gesture. 9. The computer-implemented method of claim 8, wherein determining the cultural meaning of the gesture comprises:
determining an area associated with the context of the gesture; determining a plurality of potential meanings of the determined gesture; and determining the cultural meaning of the gesture based on the plurality of potential meanings of the determined gesture and the area associated with the context of the gesture. 10. The computer-implemented method of claim 9, wherein determining the cultural meaning of the gesture based on the plurality of potential meanings of the determined gesture and the area associated with the context of the gesture comprises:
determining a score for each of a plurality of context metadata values based on the content of the digital media asset; mapping each of the plurality of context metadata values to each of the plurality of potential meanings of the determined gesture to associate an accumulated score with each of the plurality of potential meanings of the determined gesture; and selecting the cultural meaning of the gesture as one of the plurality of potential meanings of the determined gesture from among the plurality of potential meanings of the determined gesture based at least in part on a determination that the accumulated score of the one of the plurality of potential meanings of the determined gesture exceeds a threshold. 11. The computer-implemented method of claim 8, wherein the second image analysis is selected from a group consisting of facial recognition, clothing recognition, location identification, temporal identification, and role identification. 12. The computer-implemented method of claim 11, wherein the second image analysis determines one or more facial recognition context metadata values selected from a group consisting of race, gender, and ethnicity. 13. The computer-implemented method of claim 11, wherein the second image analysis determines one or more clothing recognition context metadata values selected from a group consisting of culture, religion, and country. 14. The computer-implemented method of claim 11, wherein the second image analysis determines one or more location identification context metadata values selected from a group consisting of region, country, location, and setting. 15. The computer-implemented method of claim 8, wherein the computer-implemented method is implemented in a cloud environment as a software as a service. 16. A system, comprising a processor configured to:
receive an indication of an occurrence of a human gesture; execute one or more instructions to perform an analysis of the indication of the occurrence of the human gesture to determine contextual criteria having a relationship to the occurrence of the human gesture; determine a meaning of the human gesture based at least in part on the contextual criteria and a plurality of possible intended meanings for the human gesture; and execute an instruction responsive to determining the meaning of the human gesture, wherein at least a portion of the instruction is dependent upon the meaning of the human gesture. 17. The system of claim 16, wherein the indication of the occurrence of the human gesture is a visual image asset, and wherein the contextual criteria is determined based on surroundings of the human gesture in the visual image asset. 18. The system of claim 16, wherein the plurality of possible intended meanings for the human gesture are filtered according to the contextual criteria to determine the meaning of the human gesture. 19. The system of claim 16, wherein the contextual criteria is determined according to a multistep context metadata determination system comprising facial recognition, clothing recognition, location identification, temporal identification, and role identification. 20. The system of claim 16, wherein determining the meaning of the human gesture comprises performing a semantic mapping of the human gesture to the possible intended meanings for the human gesture according to the contextual criteria. | 2,600 |
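Claim 10 of application 15,255,008 above spells out a scoring scheme: determine a score for each context metadata value, map each value onto the candidate gesture meanings to accumulate a score per meaning, then select a meaning whose accumulated score exceeds a threshold. The sketch below follows that shape; the dictionary layout, the example metadata keys, and the weights are invented for illustration and are not from the patent.

```python
from collections import defaultdict

def contextual_meaning(context_scores, mapping, threshold):
    """Pick a gesture meaning by accumulating context-weighted scores.

    context_scores: {metadata_value: score} derived from image analysis,
        e.g. {"location:stadium": 0.9}.
    mapping: {metadata_value: {candidate_meaning: weight}} — the semantic
        map from context metadata to possible meanings of the gesture.
    Returns the highest-scoring meaning if its accumulated score exceeds
    the threshold, otherwise None (no confident interpretation).
    """
    totals = defaultdict(float)
    for value, score in context_scores.items():
        for meaning, weight in mapping.get(value, {}).items():
            totals[meaning] += score * weight
    if not totals:
        return None
    best = max(totals, key=totals.get)
    return best if totals[best] > threshold else None
```

For example, with a map that ties a stadium location strongly to a "cheer" reading of a raised-fist gesture and only weakly to an "insult" reading, stadium and sports-clothing metadata accumulate toward "cheer", and the threshold gate implements claim 10's final selection step.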
9,776 | 9,776 | 14,587,182 | 2,691 | A haptic feedback device including a processor that derives touch information including at least one of state information indicating a state of a panel when a plurality of touches are detected or characteristic information indicating a characteristic of at least one of a plurality of objects touching the panel at a plurality of touch positions, and generates driving signals for driving a plurality of actuators to vibrate the panel according to a haptic signal at a first touch position and vibrate the panel at a second touch position more weakly than at the first touch position by using transfer functions of the panel which correspond to the touch information and are from each of the plurality of actuators to the first touch position and the second touch position. | 1. A haptic feedback device which provides haptic feedback to a user by vibrating a panel, the haptic feedback device comprising:
the panel; a plurality of actuators placed at mutually different positions on the panel for vibrating the panel; a detector that detects a plurality of touches in concurrent contact with the panel and detects a plurality of positions, on the panel, of the plurality of touches; a processor that
derives touch information including at least one of state information indicating a state of the panel when the plurality of touches are detected or characteristic information indicating a characteristic of at least one of a plurality of objects touching the panel at the plurality of touch positions;
determines, from among the plurality of touch positions, a first touch position at which to provide haptic feedback by vibration according to a predetermined haptic signal; and
generates driving signals for driving the plurality of actuators to vibrate the panel according to the haptic signal at the first touch position and vibrate the panel at a second touch position included in the plurality of touch positions more weakly than at the first touch position by using transfer functions of the panel from each of the plurality of actuators to the first touch position and the second touch position, the transfer functions corresponding to the touch information,
wherein the plurality of actuators vibrate the panel based on the driving signals. 2. The haptic feedback device according to claim 1,
wherein the touch information includes load information indicating at least one of a plurality of loads applied to the panel at the plurality of touch positions. 3. The haptic feedback device according to claim 1,
wherein the touch information includes contact surface area information indicating at least one of a plurality of contact surface areas between the panel and the plurality of objects at the plurality of touch positions. 4. The haptic feedback device according to claim 1,
wherein the touch information includes hardness information indicating hardness of at least one of the plurality of objects touching the panel at the plurality of touch positions. 5. The haptic feedback device according to claim 1,
wherein the processor further
generates filters for filtering a given haptic signal to generate driving signals for driving the plurality of actuators to vibrate the panel at the first touch position according to the given haptic signal and not vibrate the panel at the second touch position by using the transfer functions,
wherein the driving signals are generated by filtering the haptic signal with the filters. 6. The haptic feedback device according to claim 5,
wherein the filters are generated so that a sum of convolution results, in a time domain, of first transfer functions included in the transfer functions and the filters indicates an impulse, and a sum of convolution results, in the time domain, of second transfer functions included in the transfer functions and the filters indicates zero, the first transfer functions indicating the transfer functions from each of the plurality of actuators to the first touch position and the second transfer functions indicating the transfer functions from each of the plurality of actuators to the second touch position. 7. The haptic feedback device according to claim 5,
wherein the filters are generated so that a sum of products, in a frequency domain, of first transfer functions included in the transfer functions and the filters indicates an impulse, and a sum of products, in the frequency domain, of second transfer functions included in the transfer functions and the filters indicates zero, the first transfer functions indicating the transfer functions from each of the plurality of actuators to the first touch position and the second transfer functions indicating the transfer functions from each of the plurality of actuators to the second touch position. 8. The haptic feedback device according to claim 5,
wherein the filters are generated by using the transfer functions corresponding to information associated with the second touch position among the touch information. 9. The haptic feedback device according to claim 5,
wherein the processor further:
derives a plurality of transfer functions respectively corresponding to a plurality of pieces of touch information similar to the derived touch information;
interpolates a transfer function corresponding to the derived touch information using the plurality of derived transfer functions; and
calculates the filters using the interpolated transfer function. 10. The haptic feedback device according to claim 9,
wherein the interpolated transfer function is interpolated using a linear combination of the plurality of derived transfer functions. 11. The haptic feedback device according to claim 9,
wherein the interpolated transfer function is interpolated by performing polynomial approximation using (i) an amplitude and a phase of each frequency in the plurality of derived transfer functions and (ii) the plurality of pieces of touch information similar to the derived touch information. 12. A haptic feedback method of providing haptic feedback to a user by vibrating a panel with a plurality of actuators placed at mutually different positions on the panel, the haptic feedback method comprising:
detecting a plurality of touches in concurrent contact with the panel and detecting a plurality of positions, on the panel, respectively of the plurality of touches; determining, from among the plurality of touch positions, a first touch position at which to provide haptic feedback by vibration according to a predetermined haptic signal; deriving touch information including at least one of state information indicating a state of the panel when the plurality of touches are detected or characteristic information indicating a characteristic of at least one of a plurality of objects touching the panel at the plurality of touch positions; generating driving signals for driving the plurality of actuators to vibrate the panel according to the haptic signal at the first touch position and vibrate the panel at a second touch position included in the plurality of touch positions more weakly than at the first touch position by using transfer functions of the panel from each of the plurality of actuators to the first touch position and the second touch position, the transfer functions corresponding to the touch information; and driving the plurality of actuators based on the driving signals. 13. A non-transitory computer-readable recording medium for use in a computer, the recording medium having a computer program recorded thereon for causing the computer to execute the haptic feedback method according to claim 12. 
| A haptic feedback device including a processor that derives touch information including at least one of state information indicating a state of a panel when a plurality of touches are detected or characteristic information indicating a characteristic of at least one of a plurality of objects touching the panel at a plurality of touch positions, and generates driving signals for driving a plurality of actuators to vibrate the panel according to a haptic signal at a first touch position and vibrate the panel at a second touch position more weakly than at the first touch position by using transfer functions of the panel which correspond to the touch information and are from each of the plurality of actuators to the first touch position and the second touch position.1. A haptic feedback device which provides haptic feedback to a user by vibrating a panel, the haptic feedback device comprising:
the panel; a plurality of actuators placed at mutually different positions on the panel for vibrating the panel; a detector that detects a plurality of touches in concurrent contact with the panel and detects a plurality of positions, on the panel, of the plurality of touches; a processor that
derives touch information including at least one of state information indicating a state of the panel when the plurality of touches are detected or characteristic information indicating a characteristic of at least one of a plurality of objects touching the panel at the plurality of touch positions;
determines, from among the plurality of touch positions, a first touch position at which to provide haptic feedback by vibration according to a predetermined haptic signal; and
generates driving signals for driving the plurality of actuators to vibrate the panel according to the haptic signal at the first touch position and vibrate the panel at a second touch position included in the plurality of touch positions more weakly than at the first touch position by using transfer functions of the panel from each of the plurality of actuators to the first touch position and the second touch position, the transfer functions corresponding to the touch information,
wherein the plurality of actuators vibrate the panel based on the driving signals. 2. The haptic feedback device according to claim 1,
wherein the touch information includes load information indicating at least one of a plurality of loads applied to the panel at the plurality of touch positions. 3. The haptic feedback device according to claim 1,
wherein the touch information includes contact surface area information indicating at least one of a plurality of contact surface areas between the panel and the plurality of objects at the plurality of touch positions. 4. The haptic feedback device according to claim 1,
wherein the touch information includes hardness information indicating hardness of at least one of the plurality of objects touching the panel at the plurality of touch positions. 5. The haptic feedback device according to claim 1,
wherein the processor further
generates filters for filtering a given haptic signal to generate driving signals for driving the plurality of actuators to vibrate the panel at the first touch position according to the given haptic signal and not vibrate the panel at the second touch position by using the transfer functions,
wherein the driving signals are generated by filtering the haptic signal with the filters. 6. The haptic feedback device according to claim 5,
wherein the filters are generated so that a sum of convolution results, in a time domain, of first transfer functions included in the transfer functions and the filters indicates an impulse, and a sum of convolution results, in the time domain, of second transfer functions included in the transfer functions and the filters indicates zero, the first transfer functions indicating the transfer functions from each of the plurality of actuators to the first touch position and the second transfer functions indicating the transfer functions from each of the plurality of actuators to the second touch position. 7. The haptic feedback device according to claim 5,
wherein the filters are generated so that a sum of products, in a frequency domain, of first transfer functions included in the transfer functions and the filters indicates an impulse, and a sum of products, in the frequency domain, of second transfer functions included in the transfer functions and the filters indicates zero, the first transfer functions indicating the transfer functions from each of the plurality of actuators to the first touch position and the second transfer functions indicating the transfer functions from each of the plurality of actuators to the second touch position. 8. The haptic feedback device according to claim 5,
wherein the filters are generated by using the transfer functions corresponding to information associated with the second touch position among the touch information. 9. The haptic feedback device according to claim 5,
wherein the processor further:
derives a plurality of transfer functions respectively corresponding to a plurality of pieces of touch information similar to the derived touch information;
interpolates a transfer function corresponding to the derived touch information using the plurality of derived transfer functions; and
calculates the filters using the interpolated transfer function. 10. The haptic feedback device according to claim 9,
wherein the interpolated transfer function is interpolated using a linear combination of the plurality of derived transfer functions. 11. The haptic feedback device according to claim 9,
wherein the interpolated transfer function is interpolated by performing polynomial approximation using (i) an amplitude and a phase of each frequency in the plurality of derived transfer functions and (ii) the plurality of pieces of touch information similar to the derived touch information. 12. A haptic feedback method of providing haptic feedback to a user by vibrating a panel with a plurality of actuators placed at mutually different positions on the panel, the haptic feedback method comprising:
detecting a plurality of touches in concurrent contact with the panel and detecting a plurality of positions, on the panel, respectively of the plurality of touches; determining, from among the plurality of touch positions, a first touch position at which to provide haptic feedback by vibration according to a predetermined haptic signal; deriving touch information including at least one of state information indicating a state of the panel when the plurality of touches are detected or characteristic information indicating a characteristic of at least one of a plurality of objects touching the panel at the plurality of touch positions; generating driving signals for driving the plurality of actuators to vibrate the panel according to the haptic signal at the first touch position and vibrate the panel at a second touch position included in the plurality of touch positions more weakly than at the first touch position by using transfer functions of the panel from each of the plurality of actuators to the first touch position and the second touch position, the transfer functions corresponding to the touch information; and driving the plurality of actuators based on the driving signals. 13. A non-transitory computer-readable recording medium for use in a computer, the recording medium having a computer program recorded thereon for causing the computer to execute the haptic feedback method according to claim 12. | 2,600 |
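The filter conditions in claims 6 and 7 of the haptic-feedback row above (sum of products with the first-position transfer functions equals an impulse, sum with the second-position transfer functions equals zero) reduce, per frequency, to a small linear system. A minimal sketch for the two-actuator case, with made-up complex transfer-function values for illustration (function and variable names are my own, not from the patent):

```python
# Hedged sketch of the per-frequency filter design in claims 5-7: find filter
# gains (f_a, f_b) so the panel vibrates with unit response at the first touch
# position and zero response at the second. G1 and G2 hold one complex
# transfer-function value per actuator at a single frequency.

def design_filters(G1, G2):
    """Solve G1 . f = 1 and G2 . f = 0 for f = (f_a, f_b) by Cramer's rule."""
    g1a, g1b = G1
    g2a, g2b = G2
    det = g1a * g2b - g1b * g2a  # 2x2 determinant; must be nonzero
    f_a = g2b / det              # Cramer's rule, right-hand side (1, 0)
    f_b = -g2a / det
    return f_a, f_b
```

A real device would repeat this solve at every frequency bin of measured (or, per claims 9-11, interpolated) transfer functions, then filter the haptic signal with the result.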
9,777 | 9,777 | 13,917,603 | 2,613 | A multi-step animation sequence for smoothly transitioning from a map view to a panorama view of a specified location is disclosed. An orientation overlay can be displayed on the panorama, showing a direction and angular extent of the field of view of the panorama. An initial specified location and a current location of the panorama can also be displayed on the orientation overlay. A navigable placeholder panorama to be displayed in place of a panorama at the specified location when panorama data is not available is disclosed. A perspective view of a street name annotation can be laid on the surface of a street in the panorama. | 1. A computer-implemented method, comprising:
presenting a map containing a specified location on a display; receiving user input requesting a panorama view of the specified location; and presenting an animated sequence transitioning from the map to the panorama view of the specified location, where the animated sequence comprises:
zooming into the specified location on the map;
transitioning from the zoomed map to a panorama with a field of view showing a street surface at the specified location; and
spinning the panorama such that the field of view tilts up from the street surface to the horizon. 2. The method of claim 1, wherein
the map and the panorama are both displayed in a landscape orientation. 3. The method of claim 2, further comprising:
upon completion of the animated sequence, presenting an orientation overlay on the panorama, where the orientation overlay indicates a direction and an angular extent of the field of view. 4. The method of claim 1, wherein
the map and the panorama are both displayed in a portrait orientation, and the method further comprises: upon completion of the animated sequence, receiving a second user input rotating the display to a landscape orientation; and presenting an orientation overlay on the panorama, where the orientation overlay indicates a direction and an angular extent of the field of view. 5. The method of claim 1, wherein
presenting a map showing a specified location further comprises presenting a visual indicator at the specified location on the map; and presenting a user interface element for invoking the panorama view of the specified location. 6. The method of claim 5, wherein
the visual indicator depicts a push pin; and the user interface element displays a street address of the specified location. 7. The method of claim 1, further comprising:
presenting a perspective view of a street name annotation laid on the street surface in the panorama. 8. The method of claim 1, further comprising:
presenting a perspective view of a semi-transparent ribbon with embedded street name text, the semi-transparent ribbon being laid on the street surface along the direction of a street in the panorama. 9. The method of claim 1, further comprising:
presenting a perspective view of a navigation indicator laid on the street surface in the panorama, where a user input directed to the navigation indicator causes the panorama to advance in a direction pointed by the navigation indicator. 10. The method of claim 1, further comprising:
receiving a notification that no panorama for the specified location is available; presenting the animated sequence using a placeholder panorama in place of the panorama; upon completion of the animated sequence, presenting an orientation overlay on the placeholder panorama, where the orientation overlay indicates a direction and an angular extent of the field of view of the placeholder panorama; and presenting a perspective view of a street name annotation and a perspective view of a navigation indicator pointing in the direction shown in the orientation overlay. 11. The method of claim 1, where the display is a touch-sensitive display responsive to a multi-touch gesture. 12. A computer-implemented method, comprising:
presenting a street panorama of a specified street location; and presenting an orientation overlay on the street panorama, where the orientation overlay indicates a direction and an angular extent of a field of view of the street panorama on a portion of a street map. 13. The method of claim 12, wherein
the orientation overlay comprises a visual indicator identifying the specified street location on the portion of the street map. 14. The method of claim 12, further comprising:
receiving a user input changing the field of view of the street panorama; and updating the orientation overlay to reflect changes in the direction or angular extent of the field of view. 15. The method of claim 14, wherein
presenting an orientation overlay on the street panorama further comprises: presenting a pie-shaped indicator where an angle of a pie slice in the pie-shaped indicator opens in the direction of the field of view and has a size based on the angular extent of the field of view. 16. The method of claim 12, further comprising:
presenting a second street panorama of a second street location; and updating the orientation overlay based on the second street panorama. 17. The method of claim 16, wherein
presenting an orientation overlay on the street panorama further comprises: presenting a pie-shaped indicator where a vertex of a pie slice in the pie-shaped indicator overlaps with the specified street location of the street panorama on the street map; and wherein updating the orientation overlay further comprises: showing a different portion of the street map such that the vertex of the pie slice overlaps with the second street location on the different portion of the street map. 18. The method of claim 12, further comprising:
presenting a user interface element on the street panorama in response to a user input, where the user interface element shows a street address corresponding to the street location of the street panorama. 19. The method of claim 18, further comprising:
presenting a second street panorama of a second street location; and updating the user interface element to show a second street address corresponding to the second street panorama. 20. The method of claim 12, where the display is a touch-sensitive display responsive to a multi-touch gesture. | A multi-step animation sequence for smoothly transitioning from a map view to a panorama view of a specified location is disclosed. An orientation overlay can be displayed on the panorama, showing a direction and angular extent of the field of view of the panorama. An initial specified location and a current location of the panorama can also be displayed on the orientation overlay. A navigable placeholder panorama to be displayed in place of a panorama at the specified location when panorama data is not available is disclosed. A perspective view of a street name annotation can be laid on the surface of a street in the panorama.1. A computer-implemented method, comprising:
presenting a map containing a specified location on a display; receiving user input requesting a panorama view of the specified location; and presenting an animated sequence transitioning from the map to the panorama view of the specified location, where the animated sequence comprises:
zooming into the specified location on the map;
transitioning from the zoomed map to a panorama with a field of view showing a street surface at the specified location; and
spinning the panorama such that the field of view tilts up from the street surface to the horizon. 2. The method of claim 1, wherein
the map and the panorama are both displayed in a landscape orientation. 3. The method of claim 2, further comprising:
upon completion of the animated sequence, presenting an orientation overlay on the panorama, where the orientation overlay indicates a direction and an angular extent of the field of view. 4. The method of claim 1, wherein
the map and the panorama are both displayed in a portrait orientation, and the method further comprises: upon completion of the animated sequence, receiving a second user input rotating the display to a landscape orientation; and presenting an orientation overlay on the panorama, where the orientation overlay indicates a direction and an angular extent of the field of view. 5. The method of claim 1, wherein
presenting a map showing a specified location further comprises presenting a visual indicator at the specified location on the map; and presenting a user interface element for invoking the panorama view of the specified location. 6. The method of claim 5, wherein
the visual indicator depicts a push pin; and the user interface element displays a street address of the specified location. 7. The method of claim 1, further comprising:
presenting a perspective view of a street name annotation laid on the street surface in the panorama. 8. The method of claim 1, further comprising:
presenting a perspective view of a semi-transparent ribbon with embedded street name text, the semi-transparent ribbon being laid on the street surface along the direction of a street in the panorama. 9. The method of claim 1, further comprising:
presenting a perspective view of a navigation indicator laid on the street surface in the panorama, where a user input directed to the navigation indicator causes the panorama to advance in a direction pointed by the navigation indicator. 10. The method of claim 1, further comprising:
receiving a notification that no panorama for the specified location is available; presenting the animated sequence using a placeholder panorama in place of the panorama; upon completion of the animated sequence, presenting an orientation overlay on the placeholder panorama, where the orientation overlay indicates a direction and an angular extent of the field of view of the placeholder panorama; and presenting a perspective view of a street name annotation and a perspective view of a navigation indicator pointing in the direction shown in the orientation overlay. 11. The method of claim 1, where the display is a touch-sensitive display responsive to a multi-touch gesture. 12. A computer-implemented method, comprising:
presenting a street panorama of a specified street location; and presenting an orientation overlay on the street panorama, where the orientation overlay indicates a direction and an angular extent of a field of view of the street panorama on a portion of a street map. 13. The method of claim 12, wherein
the orientation overlay comprises a visual indicator identifying the specified street location on the portion of the street map. 14. The method of claim 12, further comprising:
receiving a user input changing the field of view of the street panorama; and updating the orientation overlay to reflect changes in the direction or angular extent of the field of view. 15. The method of claim 14, wherein
presenting an orientation overlay on the street panorama further comprises: presenting a pie-shaped indicator where an angle of a pie slice in the pie-shaped indicator opens in the direction of the field of view and has a size based on the angular extent of the field of view. 16. The method of claim 12, further comprising:
presenting a second street panorama of a second street location; and updating the orientation overlay based on the second street panorama. 17. The method of claim 16, wherein
presenting an orientation overlay on the street panorama further comprises: presenting a pie-shaped indicator where a vertex of a pie slice in the pie-shaped indicator overlaps with the specified street location of the street panorama on the street map; and wherein updating the orientation overlay further comprises: showing a different portion of the street map such that the vertex of the pie slice overlaps with the second street location on the different portion of the street map. 18. The method of claim 12, further comprising:
presenting a user interface element on the street panorama in response to a user input, where the user interface element shows a street address corresponding to the street location of the street panorama. 19. The method of claim 18, further comprising:
presenting a second street panorama of a second street location; and updating the user interface element to show a second street address corresponding to the second street panorama. 20. The method of claim 12, where the display is a touch-sensitive display responsive to a multi-touch gesture. | 2,600 |
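The pie-shaped orientation indicator in claims 15-17 of the panorama row above opens in the viewing direction with an angular size equal to the field of view's extent. A minimal geometry sketch (names and degree convention are my own assumptions, not from the patent):

```python
# Hedged sketch of the pie-slice wedge in claims 15-17: the wedge is centered
# on the panorama's heading and spans the field of view's angular extent,
# with angles normalized to [0, 360) degrees.

def pie_slice(heading_deg, fov_deg):
    """Return (start, end) angles in degrees for the orientation wedge."""
    start = (heading_deg - fov_deg / 2) % 360
    end = (heading_deg + fov_deg / 2) % 360
    return start, end
```

When the user pans or zooms the panorama (claim 14), the overlay would be redrawn from the new heading and extent; the wedge's vertex stays pinned to the street location on the map portion (claim 17).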
9,778 | 9,778 | 15,095,571 | 2,616 | The present invention relates to a mapping and comparing choroplethic housing statistics. In one example, this comprises accessing property data corresponding to a geospatial area. Analytics are used to generate usable property data statistics from the accessed property data. A thematic map image based on the usable property data statistics is then generated according to comparison categories, so that the thematic map image may be displayed. | 1-19. (canceled) 20. A method for mapping and comparing choroplethic housing statistics, comprising:
accessing property data corresponding to a geospatial area to generate property data statistics; generating, according to at least one of a set of comparison categories, a thematic map image based on dispersing the property data statistics across the geospatial area in accordance with an aggregation level; and displaying on a display device the thematic map image. 21. The method of claim 20, wherein the geospatial area is alterable based on an identified statistical region. 22. The method of claim 20, wherein property data includes financial and housing market data, and wherein accessing property data includes gathering financial and housing market data from an external database. 23. The method of claim 20, wherein dispersing the property data statistics comprises utilizing a set of bins that identify a rank for sections of the aggregation level. 24. The method of claim 23, wherein the thematic map image includes a set of choroplethic housing statistic map images displaying pattern areas that are patterned in proportion to the set of bins. 25. The method of claim 24, wherein displaying on a display device the thematic map image includes displaying the set of choroplethic housing statistic map images in a side-by-side orientation. 26. The method of claim 20, wherein when a section of the aggregation levels within the thematic map image is selected, a new thematic map image is generated and displayed. 27. The method of claim 26, wherein the new thematic map image is generated and displayed in a side-by-side orientation with the thematic map image, and wherein the thematic map image resizes to accommodate the side-by-side orientation. 28. The method of claim 20, further comprising:
performing analytics on the property data to generate the property data statistics including deriving statistics that are not directly reported by the property data. 29. The method of claim 28, wherein performing analytics further includes performing a usability calculation on the property data to produce clean statistics. 30. The method of claim 20, wherein the thematic map image includes a set of choroplethic housing statistic map images that are subjected to an identified time range. 31. The method of claim 20, further comprising:
generating a table corresponding to the thematic map image. 32. The method of claim 31, wherein generating the table is based on combining the geospatial area and the property data statistics, and wherein, after the table is generated, additional property data is added to the table in accordance with the aggregation levels. 33. The method of claim 20, wherein the aggregation levels include the geographic aggregation levels of region, state, metropolitan statistical area, county, zip code, and census tract. 34. A non-transitory computer readable medium storing program code executable by a processor to perform operations comprising:
accessing property data corresponding to a geospatial area to generate property data statistics; generating, according to at least one of a set of comparison categories, a thematic map image based on dispersing the property data statistics across the geospatial area in accordance with an aggregation level; and displaying on a display device the thematic map image. 35. The computer readable medium of claim 34, wherein the geospatial area is alterable based on an identified statistical region. 36. The computer readable medium of claim 34, wherein property data includes financial and housing market data, and wherein accessing property data includes gathering financial and housing market data from an external database. 37. The computer readable medium of claim 34, wherein dispersing the property data statistics comprises utilizing a set of bins that identify a rank for sections of the aggregation level. 38. The computer readable medium of claim 37, wherein the thematic map image includes a set of choroplethic housing statistic map images displaying pattern areas that are patterned in proportion to the set of bins. 39. The computer readable medium of claim 38, wherein displaying on a display device the thematic map image includes displaying the set of choroplethic housing statistic map images in a side-by-side orientation. | The present invention relates to a mapping and comparing choroplethic housing statistics. In one example, this comprises accessing property data corresponding to a geospatial area. Analytics are used to generate usable property data statistics from the accessed property data. A thematic map image based on the usable property data statistics is then generated according to comparison categories, so that the thematic map image may be displayed.1-19. (canceled) 20. A method for mapping and comparing choroplethic housing statistics, comprising:
accessing property data corresponding to a geospatial area to generate property data statistics; generating, according to at least one of a set of comparison categories, a thematic map image based on dispersing the property data statistics across the geospatial area in accordance with an aggregation level; and displaying on a display device the thematic map image. 21. The method of claim 20, wherein the geospatial area is alterable based on an identified statistical region. 22. The method of claim 20, wherein property data includes financial and housing market data, and wherein accessing property data includes gathering financial and housing market data from an external database. 23. The method of claim 20, wherein dispersing the property data statistics comprises utilizing a set of bins that identify a rank for sections of the aggregation level. 24. The method of claim 23, wherein the thematic map image includes a set of choroplethic housing statistic map images displaying pattern areas that are patterned in proportion to the set of bins. 25. The method of claim 24, wherein displaying on a display device the thematic map image includes displaying the set of choroplethic housing statistic map images in a side-by-side orientation. 26. The method of claim 20, wherein when a section of the aggregation levels within the thematic map image is selected, a new thematic map image is generated and displayed. 27. The method of claim 26, wherein the new thematic map image is generated and displayed in a side-by-side orientation with the thematic map image, and wherein the thematic map image resizes to accommodate the side-by-side orientation. 28. The method of claim 20, further comprising:
performing analytics on the property data to generate the property data statistics including deriving statistics that are not directly reported by the property data. 29. The method of claim 28, wherein performing analytics further includes performing a usability calculation on the property data to produce clean statistics. 30. The method of claim 20, wherein the thematic map image includes a set of choroplethic housing statistic map images that are subjected to an identified time range. 31. The method of claim 20, further comprising:
generating a table corresponding to the thematic map image. 32. The method of claim 31, wherein generating the table is based on combining the geospatial area and the property data statistics, and wherein, after the table is generated, additional property data is added to the table in accordance with the aggregation levels. 33. The method of claim 20, wherein the aggregation levels include the geographic aggregation levels of region, state, metropolitan statistical area, county, zip code, and census tract. 34. A non-transitory computer readable medium storing program code executable by a processor to perform operations comprising:
accessing property data corresponding to a geospatial area to generate property data statistics; generating, according to at least one of a set of comparison categories, a thematic map image based on dispersing the property data statistics across the geospatial area in accordance with an aggregation level; and displaying on a display device the thematic map image. 35. The computer readable medium of claim 34, wherein the geospatial area is alterable based on an identified statistical region. 36. The computer readable medium of claim 34, wherein property data includes financial and housing market data, and wherein accessing property data includes gathering financial and housing market data from an external database. 37. The computer readable medium of claim 34, wherein dispersing the property data statistics comprises utilizing a set of bins that identify a rank for sections of the aggregation level. 38. The computer readable medium of claim 37, wherein the thematic map image includes a set of choroplethic housing statistic map images displaying pattern areas that are patterned in proportion to the set of bins. 39. The computer readable medium of claim 38, wherein displaying on a display device the thematic map image includes displaying the set of choroplethic housing statistic map images in a side-by-side orientation. | 2,600 |
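Claims 23 and 37 of the choroplethic-housing row above describe a set of bins that assign a rank to each section of the aggregation level, with map areas patterned in proportion to the bins. One common way to realize this is quantile binning; a minimal sketch (the binning rule is my assumption for illustration, not stated in the patent):

```python
# Hedged sketch of the rank-binning step in claims 23 and 37: each section's
# statistic is placed in one of n_bins equal-population (quantile) bins, and
# the bin index doubles as the section's shading rank on the choropleth.

def quantile_bins(values, n_bins):
    """Return, for each value, its bin index in [0, n_bins)."""
    ranked = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, i in enumerate(ranked):
        bins[i] = rank * n_bins // len(values)
    return bins
```

Each section of the aggregation level (state, county, zip code, etc.) would then be filled with the pattern or shade keyed to its bin index.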
9,779 | 9,779 | 14,097,954 | 2,656 | An embodiment provides a method, including: receiving, at an audio receiver of an information handling device, user voice input; identifying, using a processor, words included in the user voice input; determining, using the processor, one of the identified words renders ambiguous a command included in the user voice input; accessing, using the processor, context data; disambiguating, using the processor, the command based on the context data; and committing, using the processor, a predetermined action according to the command. Other aspects are described and claimed. | 1. A method, comprising:
receiving, at an audio receiver of an information handling device, user voice input; identifying, using a processor, words included in the user voice input; determining, using the processor, at least one of the identified words renders ambiguous a command included in the user voice input; accessing, using the processor, context data; disambiguating, using the processor, the command based on the context data; and committing, using the processor, a predetermined action according to the command. 2. The method of claim 1, wherein the context data is derived from the voice input. 3. The method of claim 2, wherein the context data derived from the voice input includes an identified word included in the user voice input selected from the group of words consisting of a contact and an application name. 4. The method of claim 1, wherein the context data is derived from a list of open applications on the information handling device. 5. The method of claim 1, wherein the context data is derived from a list of most recently used applications on the information handling device. 6. The method of claim 1, wherein the context data is derived from a list of most recently used objects on the information handling device. 7. The method of claim 1, wherein the disambiguating comprises associating a context data item with the identified word rendering the command ambiguous. 8. The method of claim 7, wherein the associating comprises linking a device object to the identified word rendering the command ambiguous using the context data item. 9. The method of claim 8, further comprising replacing the identified word rendering the command ambiguous with a device object identifier. 10. The method of claim 9, wherein the device object identifier is a file name pointing to the device object subject to the command. 11. An information handling device, comprising:
an audio receiver; a processor; and a memory device that stores instructions executable by the processor to: receive, at the audio receiver of an information handling device, user voice input; identify words included in the user voice input; determine one of the identified words renders ambiguous a command included in the user voice input; access context data; disambiguate the command based on the context data; and commit a predetermined action according to the command. 12. The information handling device of claim 11, wherein the context data is derived from the voice input. 13. The information handling device of claim 12, wherein the context data derived from the voice input includes an identified word included in the user voice input selected from the group of words consisting of a contact and an application name. 14. The information handling device of claim 11, wherein the context data is derived from a list of open applications on the information handling device. 15. The information handling device of claim 11, wherein the context data is derived from a list of most recently used applications on the information handling device. 16. The information handling device of claim 11, wherein the context data is derived from a list of most recently used objects on the information handling device. 17. The information handling device of claim 11, wherein the disambiguating comprises associating a context data item with the identified word rendering the command ambiguous. 18. The information handling device of claim 7, wherein the associating comprises linking a device object to the identified word rendering the command ambiguous using the context data item. 19. The information handling device of claim 8, further comprising replacing the identified word rendering the command ambiguous with a device object identifier. 20. A product, comprising:
a storage device having code stored therewith, the code comprising: code that receives at an audio receiver of an information handling device, user voice input; code that identifies, using a processor, words included in the user voice input; code that determines, using the processor, one of the identified words renders ambiguous a command included in the user voice input; code that accesses, using the processor, context data; code that disambiguates, using the processor, the command based on the context data; and code that commits, using the processor, a predetermined action according to the command. | An embodiment provides a method, including: receiving, at an audio receiver of an information handling device, user voice input; identifying, using a processor, words included in the user voice input; determining, using the processor, one of the identified words renders ambiguous a command included in the user voice input; accessing, using the processor, context data; disambiguating, using the processor, the command based on the context data; and committing, using the processor, a predetermined action according to the command. Other aspects are described and claimed.1. A method, comprising:
receiving, at an audio receiver of an information handling device, user voice input; identifying, using a processor, words included in the user voice input; determining, using the processor, at least one of the identified words renders ambiguous a command included in the user voice input; accessing, using the processor, context data; disambiguating, using the processor, the command based on the context data; and committing, using the processor, a predetermined action according to the command. 2. The method of claim 1, wherein the context data is derived from the voice input. 3. The method of claim 2, wherein the context data derived from the voice input includes an identified word included in the user voice input selected from the group of words consisting of a contact and an application name. 4. The method of claim 1, wherein the context data is derived from a list of open applications on the information handling device. 5. The method of claim 1, wherein the context data is derived from a list of most recently used applications on the information handling device. 6. The method of claim 1, wherein the context data is derived from a list of most recently used objects on the information handling device. 7. The method of claim 1, wherein the disambiguating comprises associating a context data item with the identified word rendering the command ambiguous. 8. The method of claim 7, wherein the associating comprises linking a device object to the identified word rendering the command ambiguous using the context data item. 9. The method of claim 8, further comprising replacing the identified word rendering the command ambiguous with a device object identifier. 10. The method of claim 9, wherein the device object identifier is a file name pointing to the device object subject to the command. 11. An information handling device, comprising:
an audio receiver; a processor; and a memory device that stores instructions executable by the processor to: receive, at the audio receiver of an information handling device, user voice input; identify words included in the user voice input; determine one of the identified words renders ambiguous a command included in the user voice input; access context data; disambiguate the command based on the context data; and commit a predetermined action according to the command. 12. The information handling device of claim 11, wherein the context data is derived from the voice input. 13. The information handling device of claim 12, wherein the context data derived from the voice input includes an identified word included in the user voice input selected from the group of words consisting of a contact and an application name. 14. The information handling device of claim 11, wherein the context data is derived from a list of open applications on the information handling device. 15. The information handling device of claim 11, wherein the context data is derived from a list of most recently used applications on the information handling device. 16. The information handling device of claim 11, wherein the context data is derived from a list of most recently used objects on the information handling device. 17. The information handling device of claim 11, wherein the disambiguating comprises associating a context data item with the identified word rendering the command ambiguous. 18. The information handling device of claim 7, wherein the associating comprises linking a device object to the identified word rendering the command ambiguous using the context data item. 19. The information handling device of claim 8, further comprising replacing the identified word rendering the command ambiguous with a device object identifier. 20. A product, comprising:
a storage device having code stored therewith, the code comprising: code that receives at an audio receiver of an information handling device, user voice input; code that identifies, using a processor, words included in the user voice input; code that determines, using the processor, one of the identified words renders ambiguous a command included in the user voice input; code that accesses, using the processor, context data; code that disambiguates, using the processor, the command based on the context data; and code that commits, using the processor, a predetermined action according to the command. | 2,600 |
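The claims in the row above describe resolving an ambiguous word in a voice command by linking it to a device object drawn from context data (for example, a most-recently-used object list) and replacing the word with an object identifier such as a file name. A minimal sketch of that flow follows; the claims do not specify how ambiguity is detected or which context item wins, so the `AMBIGUOUS_WORDS` set and the choose-most-recent heuristic here are illustrative assumptions, not the patented method.

```python
# Toy sketch of the claimed disambiguation flow (assumptions: a fixed set of
# ambiguous words, and "most recently used object" as the winning context item).
AMBIGUOUS_WORDS = {"it", "that", "this", "one"}

def disambiguate(command_words, mru_objects):
    """Replace any ambiguous word with a device-object identifier (a file name)."""
    resolved = []
    for word in command_words:
        if word.lower() in AMBIGUOUS_WORDS and mru_objects:
            # Link the ambiguous word to the most recently used object.
            resolved.append(mru_objects[0])
        else:
            resolved.append(word)
    return resolved

def commit(command_words, mru_objects):
    """Commit the action: here, just return the disambiguated command string."""
    return " ".join(disambiguate(command_words, mru_objects))
```

For example, `commit(["open", "it"], ["report.docx", "photo.png"])` resolves "it" to the most recently used object and yields `"open report.docx"`.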
9,780 | 9,780 | 15,007,699 | 2,658 | A method for expanding an initial ontology via processing of communication data, wherein the initial ontology is a structural representation of language elements comprising a set of entities, a set of terms, a set of term-entity associations, a set of entity-association rules, a set of abstract relations, and a set of relation instances. A method for extracting a set of significant phrases and a set of significant phrase co-occurrences from an input set of documents further includes utilizing the terms to identify relations within the training set of communication data, wherein a relation is a pair of terms that appear in proximity to one another. | 1. (canceled) 2. (canceled) 3. A method for extracting a set of significant phrases from an input set of documents and an input generic language model, the method comprising:
accepting a generic language model and a set of documents as inputs; generating a source-specific language model by at least: subdividing each document into meaning units, each meaning unit comprising one or more n-grams, counting the n-grams of each meaning unit up to a predetermined order, and once the n-grams are counted, language-model probabilities are estimated based on the counts and the source-specific language model is obtained; accumulating phrase candidates by creating a set of candidates where each candidate is an n-gram, wherein creating the set of candidates comprises computing a prominence score for each n-gram of each meaning unit, and if the prominence score of a given n-gram is above a prominence score threshold and if the given n-gram is not a unigram, then a stickiness score for the given n-gram is calculated, wherein the prominence score is calculated using both the generic language model and the source-specific language model, and wherein the stickiness score is calculated using the source-specific language model; and filtering the candidate phrases by at least calculating a frequency for each of the candidate phrases, calculating an overall phrase score for each of the candidate phrases, and selecting as significant those phrases of the candidate phrases for which the overall phrase scores are above a threshold phrase score. 4. The method of claim 3, wherein the given n-gram is discarded if the prominence score of the given n-gram is below the prominence score threshold. 5. The method of claim 3, wherein the given n-gram is discarded if the stickiness score of the given n-gram is below a stickiness score threshold. 6. The method of claim 3, wherein the predetermined order is three. 7. The method of claim 3, wherein the predetermined order is four. 8. The method of claim 3, wherein the source-specific language model is obtained by applying a smoothing technique. 9. 
The method of claim 1 further comprising extracting significant phrase co-occurrences from the input set of documents by at least:
iterating over the meaning units and locating occurrences of individual phrases;
counting co-occurrences of pairs of phrases in a given meaning unit;
computing, based on the count of co-occurrences, a probability of a phrase and a probability of the co-occurrence of a pair of phrases in the given meaning unit;
calculating a log-likelihood of the co-occurrence of the pair of phrases using both the probability of the phrase and the probability of the co-occurrence of the pair of phrases; and
identifying a significant co-occurrence of the pair of phrases if the log-likelihood of the pair is above a predetermined log-likelihood threshold. 10. A system for extracting a set of significant phrases from an input set of documents and an input generic language model, the system comprising:
a processor; and a memory coupled to the processor, the memory storing instructions which when executed by the processor cause the system to perform a method comprising:
accepting a generic language model and a set of documents as inputs;
generating a source-specific language model by at least: subdividing each document into meaning units, each meaning unit comprising one or more n-grams, counting the n-grams of each meaning unit up to a predetermined order, and once the n-grams are counted, language-model probabilities are estimated based on the counts and the source-specific language model is obtained;
accumulating phrase candidates by creating a set of candidates where each candidate is an n-gram, wherein creating the set of candidates comprises computing a prominence score for each n-gram of each meaning unit, and if the prominence score of a given n-gram is above a prominence score threshold and if the given n-gram is not a unigram, then a stickiness score for the given n-gram is calculated, wherein the prominence score is calculated using both the generic language model and the source-specific language model, and wherein the stickiness score is calculated using the source-specific language model; and
filtering the candidate phrases by at least calculating a frequency for each of the candidate phrases, calculating an overall phrase score for each of the candidate phrases, and selecting as significant those phrases of the candidate phrases for which the overall phrase scores are above a threshold phrase score. 11. The system of claim 10, wherein the given n-gram is discarded by the processor if the prominence score of the given n-gram is below the prominence score threshold. 12. The system of claim 10, wherein the given n-gram is discarded by the processor if the stickiness score of the given n-gram is below a stickiness score threshold. 13. The system of claim 10, wherein the predetermined order is three. 14. The system of claim 10, wherein the predetermined order is four. 15. The system of claim 10, wherein the source-specific language model is obtained by applying a smoothing technique. 16. The system of claim 10, wherein the method further comprises extracting significant phrase co-occurrences from the input set of documents by at least:
iterating over the meaning units and locating occurrences of individual phrases; counting co-occurrences of pairs of phrases in a given meaning unit; computing, based on the count of co-occurrences, a probability of a phrase and a probability of the co-occurrence of a pair of phrases in the given meaning unit; calculating a log-likelihood of the co-occurrence of the pair of phrases using both the probability of the phrase and the probability of the co-occurrence of the pair of phrases; and identifying a significant co-occurrence of the pair of phrases if the log-likelihood of the pair is over a predetermined log-likelihood threshold. 17. A non-transitory computer-readable medium having stored thereon a sequence of instructions that when executed by a system causes the system to perform a method comprising:
accepting a generic language model and a set of documents as inputs; generating a source-specific language model by at least: subdividing each document into meaning units, each meaning unit comprising one or more n-grams, counting the n-grams of each meaning unit up to a predetermined order, and once the n-grams are counted, language-model probabilities are estimated based on the counts and the source-specific language model is obtained; accumulating phrase candidates by creating a set of candidates where each candidate is an n-gram, wherein creating the set of candidates comprises computing a prominence score for each n-gram of each meaning unit, and if the prominence score of a given n-gram is above a prominence score threshold and if the given n-gram is not a unigram, then a stickiness score for the given n-gram is calculated, wherein the prominence score is calculated using both the generic language model and the source-specific language model, and wherein the stickiness score is calculated using the source-specific language model; and filtering the candidate phrases by at least calculating a frequency for each of the candidate phrases, calculating an overall phrase score for each of the candidate phrases, and selecting as significant those phrases of the candidate phrases for which the overall phrase scores are above a threshold phrase score. 18. The non-transitory computer-readable medium of claim 17, wherein the given n-gram is discarded by the processor if the prominence score of the given n-gram is below the prominence score threshold. 19. The non-transitory computer-readable medium of claim 17, wherein the given n-gram is discarded by the processor if the stickiness score of the given n-gram is below a stickiness score threshold. 20. The non-transitory computer-readable medium of claim 17, wherein the predetermined order is three or four. 21. 
The non-transitory computer-readable medium of claim 17, wherein the source-specific language model is obtained by applying a smoothing technique. 22. The non-transitory computer-readable medium of claim 17, wherein the method further comprises extracting significant phrase co-occurrences from the input set of documents by at least:
iterating over the meaning units and locating occurrences of individual phrases; counting co-occurrences of pairs of phrases in a given meaning unit; computing, based on the count of co-occurrences, a probability of a phrase and a probability of the co-occurrence of a pair of phrases in the given meaning unit; calculating a log-likelihood of the co-occurrence of the pair of phrases using both the probability of the phrase and the probability of the co-occurrence of the pair of phrases; and identifying a significant co-occurrence of the pair of phrases if the log-likelihood of the pair is over a predetermined log-likelihood threshold. | A method for expanding an initial ontology via processing of communication data, wherein the initial ontology is a structural representation of language elements comprising a set of entities, a set of terms, a set of term-entity associations, a set of entity-association rules, a set of abstract relations, and a set of relation instances. A method for extracting a set of significant phrases and a set of significant phrase co-occurrences from an input set of documents further includes utilizing the terms to identify relations within the training set of communication data, wherein a relation is a pair of terms that appear in proximity to one another.1. (canceled) 2. (canceled) 3. A method for extracting a set of significant phrases from an input set of documents and an input generic language model, the method comprising:
accepting a generic language model and a set of documents as inputs; generating a source-specific language model by at least: subdividing each document into meaning units, each meaning unit comprising one or more n-grams, counting the n-grams of each meaning unit up to a predetermined order, and once the n-grams are counted, language-model probabilities are estimated based on the counts and the source-specific language model is obtained; accumulating phrase candidates by creating a set of candidates where each candidate is an n-gram, wherein creating the set of candidates comprises computing a prominence score for each n-gram of each meaning unit, and if the prominence score of a given n-gram is above a prominence score threshold and if the given n-gram is not a unigram, then a stickiness score for the given n-gram is calculated, wherein the prominence score is calculated using both the generic language model and the source-specific language model, and wherein the stickiness score is calculated using the source-specific language model; and filtering the candidate phrases by at least calculating a frequency for each of the candidate phrases, calculating an overall phrase score for each of the candidate phrases, and selecting as significant those phrases of the candidate phrases for which the overall phrase scores are above a threshold phrase score. 4. The method of claim 3, wherein the given n-gram is discarded if the prominence score of the given n-gram is below the prominence score threshold. 5. The method of claim 3, wherein the given n-gram is discarded if the stickiness score of the given n-gram is below a stickiness score threshold. 6. The method of claim 3, wherein the predetermined order is three. 7. The method of claim 3, wherein the predetermined order is four. 8. The method of claim 3, wherein the source-specific language model is obtained by applying a smoothing technique. 9. 
The method of claim 1 further comprising extracting significant phrase co-occurrences from the input set of documents by at least:
iterating over the meaning units and locating occurrences of individual phrases;
counting co-occurrences of pairs of phrases in a given meaning unit;
computing, based on the count of co-occurrences, a probability of a phrase and a probability of the co-occurrence of a pair of phrases in the given meaning unit;
calculating a log-likelihood of the co-occurrence of the pair of phrases using both the probability of the phrase and the probability of the co-occurrence of the pair of phrases; and
identifying a significant co-occurrence of the pair of phrases if the log-likelihood of the pair is above a predetermined log-likelihood threshold. 10. A system for extracting a set of significant phrases from an input set of documents and an input generic language model, the system comprising:
a processor; and a memory coupled to the processor, the memory storing instructions which when executed by the processor cause the system to perform a method comprising:
accepting a generic language model and a set of documents as inputs;
generating a source-specific language model by at least: subdividing each document into meaning units, each meaning unit comprising one or more n-grams, counting the n-grams of each meaning unit up to a predetermined order, and once the n-grams are counted, language-model probabilities are estimated based on the counts and the source-specific language model is obtained;
accumulating phrase candidates by creating a set of candidates where each candidate is an n-gram, wherein creating the set of candidates comprises computing a prominence score for each n-gram of each meaning unit, and if the prominence score of a given n-gram is above a prominence score threshold and if the given n-gram is not a unigram, then a stickiness score for the given n-gram is calculated, wherein the prominence score is calculated using both the generic language model and the source-specific language model, and wherein the stickiness score is calculated using the source-specific language model; and
filtering the candidate phrases by at least calculating a frequency for each of the candidate phrases, calculating an overall phrase score for each of the candidate phrases, and selecting as significant those phrases of the candidate phrases for which the overall phrase scores are above a threshold phrase score. 11. The system of claim 10, wherein the given n-gram is discarded by the processor if the prominence score of the given n-gram is below the prominence score threshold. 12. The system of claim 10, wherein the given n-gram is discarded by the processor if the stickiness score of the given n-gram is below a stickiness score threshold. 13. The system of claim 10, wherein the predetermined order is three. 14. The system of claim 10, wherein the predetermined order is four. 15. The system of claim 10, wherein the source-specific language model is obtained by applying a smoothing technique. 16. The system of claim 10, wherein the method further comprises extracting significant phrase co-occurrences from the input set of documents by at least:
iterating over the meaning units and locating occurrences of individual phrases; counting co-occurrences of pairs of phrases in a given meaning unit; computing, based on the count of co-occurrences, a probability of a phrase and a probability of the co-occurrence of a pair of phrases in the given meaning unit; calculating a log-likelihood of the co-occurrence of the pair of phrases using both the probability of the phrase and the probability of the co-occurrence of the pair of phrases; and identifying a significant co-occurrence of the pair of phrases if the log-likelihood of the pair is over a predetermined log-likelihood threshold. 17. A non-transitory computer-readable medium having stored thereon a sequence of instructions that when executed by a system causes the system to perform a method comprising:
accepting a generic language model and a set of documents as inputs; generating a source-specific language model by at least: subdividing each document into meaning units, each meaning unit comprising one or more n-grams, counting the n-grams of each meaning unit up to a predetermined order, and once the n-grams are counted, language-model probabilities are estimated based on the counts and the source-specific language model is obtained; accumulating phrase candidates by creating a set of candidates where each candidate is an n-gram, wherein creating the set of candidates comprises computing a prominence score for each n-gram of each meaning unit, and if the prominence score of a given n-gram is above a prominence score threshold and if the given n-gram is not a unigram, then a stickiness score for the given n-gram is calculated, wherein the prominence score is calculated using both the generic language model and the source-specific language model, and wherein the stickiness score is calculated using the source-specific language model; and filtering the candidate phrases by at least calculating a frequency for each of the candidate phrases, calculating an overall phrase score for each of the candidate phrases, and selecting as significant those phrases of the candidate phrases for which the overall phrase scores are above a threshold phrase score. 18. The non-transitory computer-readable medium of claim 17, wherein the given n-gram is discarded by the processor if the prominence score of the given n-gram is below the prominence score threshold. 19. The non-transitory computer-readable medium of claim 17, wherein the given n-gram is discarded by the processor if the stickiness score of the given n-gram is below a stickiness score threshold. 20. The non-transitory computer-readable medium of claim 17, wherein the predetermined order is three or four. 21. 
The non-transitory computer-readable medium of claim 17, wherein the source-specific language model is obtained by applying a smoothing technique. 22. The non-transitory computer-readable medium of claim 17, wherein the method further comprises extracting significant phrase co-occurrences from the input set of documents by at least:
iterating over the meaning units and locating occurrences of individual phrases; counting co-occurrences of pairs of phrases in a given meaning unit; computing, based on the count of co-occurrences, a probability of a phrase and a probability of the co-occurrence of a pair of phrases in the given meaning unit; calculating a log-likelihood of the co-occurrence of the pair of phrases using both the probability of the phrase and the probability of the co-occurrence of the pair of phrases; and identifying a significant co-occurrence of the pair of phrases if the log-likelihood of the pair is over a predetermined log-likelihood threshold. | 2,600 |
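The co-occurrence claims in the row above walk through a concrete pipeline: count phrase occurrences and pair co-occurrences per meaning unit, convert counts to probabilities, score each pair by a log-likelihood derived from those probabilities, and keep pairs above a threshold. The claims do not give the exact log-likelihood formula, so the sketch below assumes the common pointwise form log[p(a,b) / (p(a)·p(b))]; function and variable names are illustrative.

```python
import math
from itertools import combinations

def significant_cooccurrences(meaning_units, threshold):
    """Score phrase pairs by log[ p(a,b) / (p(a) * p(b)) ] over meaning units
    and return the pairs whose score exceeds the threshold (an assumption on
    the unspecified log-likelihood in the claims)."""
    n = len(meaning_units)
    phrase_count = {}
    pair_count = {}
    for unit in meaning_units:
        phrases = set(unit)  # count each phrase at most once per meaning unit
        for p in phrases:
            phrase_count[p] = phrase_count.get(p, 0) + 1
        for a, b in combinations(sorted(phrases), 2):
            pair_count[(a, b)] = pair_count.get((a, b), 0) + 1
    significant = {}
    for (a, b), c in pair_count.items():
        p_ab = c / n                       # probability of the co-occurrence
        p_a = phrase_count[a] / n          # probability of each phrase alone
        p_b = phrase_count[b] / n
        score = math.log(p_ab / (p_a * p_b))
        if score > threshold:
            significant[(a, b)] = score
    return significant
```

With meaning units `[["credit card", "late fee"], ["credit card", "late fee"], ["credit card"], ["balance"]]`, the pair ("credit card", "late fee") scores log(0.5 / (0.75 × 0.5)) ≈ 0.288 and survives a 0.1 threshold.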
9,781 | 9,781 | 14,331,239 | 2,626 | The invention relates to a system and a method for haptic interaction with visually presented objects. In the process, three-dimensional data of the user and an object are captured and presented in a visual subsystem. At the same time, there is an interaction of the user with a haptic element in a haptic subsystem, wherein the haptic element is designed in such a way that it can imitate the surface characteristics of the object in the collision area of the hand and object. | 1. System for haptic interaction with visually presented data comprising:
at least one device for capturing three-dimensional data of an object, at least one second device for capturing three-dimensional data of a user, at least one data-processing device for processing the captured three-dimensional data of the object and of the user and for generating a visual representation of the three-dimensional data, at least one visual subsystem with a display device for visual presentation of the three-dimensional data, at least one tactile subsystem with a haptic element for interaction with the user, wherein the haptic element is designed to be at least one tactile display, a vibro-tactile display alone or also in combination with a static display and wherein the visual subsystem is arranged above the tactile subsystem and the distance between the visual subsystem and the tactile subsystem is 50 cm>x>15 cm, preferably 40 cm>x>20 cm, as a special preference 35 cm>x>25 cm. 2. System according to claim 1, characterized in that the first and/or second device for capturing the three-dimensional data is designed to be an imaging device. 3. System according to claim 1, characterized in that the first and/or second device for capturing the three-dimensional data is selected from the group consisting of optical sensors in the IR, VIS and UV range, CCD cameras, CMOS sensors, impedance measurement, sonography, magnetic resonance imaging, scintigraphy, positron emission tomography, single-photon emission computer tomography, thermography, computer tomography, digital volume tomography, endoscopics or optical tomography. 4. System according to claim 1, characterized in that the haptic element has an actuator-pixel matrix made up of actuator pixels. 5. System according to claim 4, characterized in that the actuator pixels are made of polymers whose phase transition behavior is capable of being influenced by environmental quantities. 6. 
System according to claim 5, characterized in that the actuator pixels are made of hydrogels that are designed to be capable of being influenced in terms of their phase transition behavior via the input of electrical, chemical or thermal energy. 7. System according to claim 1, further comprising a third device for capturing three-dimensional data of the user, wherein the third device is designed to capture eye movements of the user. 8. System according to claim 7, characterized in that the third device is a stationary system selected from the group consisting of a pan-tilt system, a tilting-mirror system or a fixed-camera system. 9. Method for haptic interaction with visually presented data comprising the steps:
capturing three-dimensional data of an object in real time, capturing three-dimensional data of a user in real time, generation of a visual real-time presentation of the three-dimensional data of the object, generation of a visual real-time presentation of at least one body part of the user based on the three-dimensional data of the user, presentation of the three-dimensional data of the object and of the at least one body part of the user in a visual subsystem, wherein the visually presented three-dimensional data of the object is represented with the visually presented three-dimensional data of the user in a display device of the visual subsystem, wherein the interaction of the at least one part of the visually presented user is represented with the visually presented object in the visual subsystem and the simultaneous interaction of the user with a haptic element in a tactile subsystem, wherein a collision point is determined when there is a collision of the at least one part of the visually presented user with the visually presented object, wherein the presentation in the visual subsystem takes place at a distance of 50 cm>x>15 cm, preferably 40 cm>x>20 cm, as a special preference 35 cm>x>25 cm above the tactile subsystem and the three-dimensional data of the object that is captured at the collision point of the at least one part of the visually presented user with the visually presented object is reproduced in the tactile subsystem, wherein the haptic element has a surface with a structure that is designed to reproduce the three-dimensional structure of the object at the collision point based on the three-dimensional data of the object that is captured, at least in the area of the collision point. 10. Method according to claim 9, characterized in that
hand movement of the user is captured in real time when the three-dimensional data of the user is captured, a visual real-time presentation of the hand movement of the user is generated based on the captured hand movement of the user, the visual real-time presentation of the hand movement is represented in the visual subsystem, wherein the interaction of the visually presented real-time representation of the hand movement of the user is represented with the visually presented object in the visual subsystem, so the real-time representation of the hand movement of the user interacts with the visually presented object, wherein the collision point is calculated when there is a collision of the visually presented real-time representation of the hand movement of the user with the visually presented object in the visual subsystem and at least the three-dimensional data of the object captured at the collision point is reproduced in the tactile subsystem. 11. Method according to claim 9 further comprising
capture of eye movements of the user,
processing the captured eye-movement data and determination of an active field of view of the user,
implementation of the captured eye-movement data and of the active field of view in the visual subsystem, wherein the captured eye-movement data of the user provides a local adjustment of the visual presentation of the objects and of the visual presentation of the hand movements in the area of the active field of view. 12. Use of a system according to claim 1 in medical imaging diagnostics, in the construction of vehicles and mechanical engineering, in material testing and in the areas of leisure, sports, entertainment and shopping. | The invention relates to a system and a method for haptic interaction with visually presented objects. In the process, three-dimensional data of the user and an object are captured and presented in a visual subsystem. At the same time, there is an interaction of the user with a haptic element in a haptic subsystem, wherein the haptic element is designed in such a way that it can imitate the surface characteristics of the object in the collision area of the hand and object.1. System for haptic interaction with visually presented data comprising:
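The collision-point determination recited in claims 9 and 10 (a collision of the visually presented user part with the visually presented object triggers tactile reproduction) can be illustrated with a small sketch. This is a hypothetical implementation, not from the patent: it treats the rendered hand and object as 3D point sets, finds the closest pair by brute force, and reports a collision point when the pair is within a threshold distance.

```python
import numpy as np

def find_collision_point(hand_pts, obj_pts, threshold=0.005):
    """Return the midpoint of the closest hand/object point pair,
    or None if no pair is closer than `threshold` (meters).
    Brute-force O(n*m); a real-time system would use a spatial index."""
    hand = np.asarray(hand_pts, dtype=float)
    obj = np.asarray(obj_pts, dtype=float)
    # Pairwise distances between every hand point and every object point.
    d = np.linalg.norm(hand[:, None, :] - obj[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    if d[i, j] > threshold:
        return None  # no collision this frame
    return (hand[i] + obj[j]) / 2.0

hand = [[0.0, 0.0, 0.10], [0.0, 0.0, 0.02]]
obj = [[0.0, 0.0, 0.0]]
print(find_collision_point(hand, obj, threshold=0.05))
```

In the claimed system this point would then index into the captured three-dimensional object data so the tactile subsystem can reproduce the local surface structure.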
at least one device for capturing three-dimensional data of an object, at least one second device for capturing three-dimensional data of a user, at least one data-processing device for processing the captured three-dimensional data of the object and of the user and for generating a visual representation of the three-dimensional data, at least one visual subsystem with a display device for visual presentation of the three-dimensional data, at least one tactile subsystem with a haptic element for interaction with the user, wherein the haptic element is designed to be at least one tactile display, a vibro-tactile display alone or also in combination with a static display and wherein the visual subsystem is arranged above the tactile subsystem and the distance between the visual subsystem and the tactile subsystem is 50 cm>x>15 cm, preferably 40 cm>x>20 cm, as a special preference 35 cm>x>25 cm. 2. System according to claim 1, characterized in that the first and/or second device for capturing the three-dimensional data is designed to be an imaging device. 3. System according to claim 1, characterized in that the first and/or second device for capturing the three-dimensional data is selected from the group consisting of optical sensors in the IR, VIS and UV range, CCD cameras, CMOS sensors, impedance measurement, sonography, magnetic resonance imaging, scintigraphy, positron emission tomography, single-photon emission computer tomography, thermography, computer tomography, digital volume tomography, endoscopics or optical tomography. 4. System according to claim 1, characterized in that the haptic element has an actuator-pixel matrix made up of actuator pixels. 5. System according to claim 4, characterized in that the actuator pixels are made of polymers whose phase transition behavior is capable of being influenced by environmental quantities. 6. 
System according to claim 5, characterized in that the actuator pixels are made of hydrogels that are designed to be capable of being influenced in terms of their phase transition behavior via the input of electrical, chemical or thermal energy. 7. System according to claim 1, further comprising a third device for capturing three-dimensional data of the user, wherein the third device is designed to capture eye movements of the user. 8. System according to claim 7, characterized in that the third device is a stationary system selected from the group consisting of a pan-tilt system, a tilting-mirror system or a fixed-camera system. 9. Method for haptic interaction with visually presented data comprising the steps:
capturing three-dimensional data of an object in real time, capturing three-dimensional data of a user in real time, generation of a visual real-time presentation of the three-dimensional data of the object, generation of a visual real-time presentation of at least one body part of the user based on the three-dimensional data of the user, presentation of the three-dimensional data of the object and of the at least one body part of the user in a visual subsystem, wherein the visually presented three-dimensional data of the object is represented with the visually presented three-dimensional data of the user in a display device of the visual subsystem, wherein the interaction of the at least one part of the visually presented user is represented with the visually presented object in the visual subsystem and the simultaneous interaction of the user with a haptic element in a tactile subsystem, wherein a collision point is determined when there is a collision of the at least one part of the visually presented user with the visually presented object, wherein the presentation in the visual subsystem takes place at a distance of 50 cm>x>15 cm, preferably 40 cm>x>20 cm, as a special preference 35 cm>x>25 cm above the tactile subsystem and the three-dimensional data of the object that is captured at the collision point of the at least one part of the visually presented user with the visually presented object is reproduced in the tactile subsystem, wherein the haptic element has a surface with a structure that is designed to reproduce the three-dimensional structure of the object at the collision point based on the three-dimensional data of the object that is captured, at least in the area of the collision point. 10. Method according to claim 9, characterized in that
hand movement of the user is captured in real time when the three-dimensional data of the user is captured, a visual real-time presentation of the hand movement of the user is generated based on the captured hand movement of the user, the visual real-time presentation of the hand movement is represented in the visual subsystem, wherein the interaction of the visually presented real-time representation of the hand movement of the user is represented with the visually presented object in the visual subsystem, so the real-time representation of the hand movement of the user interacts with the visually presented object, wherein the collision point is calculated when there is a collision of the visually presented real-time representation of the hand movement of the user with the visually presented object in the visual subsystem and at least the three-dimensional data of the object captured at the collision point is reproduced in the tactile subsystem. 11. Method according to claim 9 further comprising
capture of eye movements of the user,
processing the captured eye-movement data and determination of an active field of view of the user,
implementation of the captured eye-movement data and of the active field of view in the visual subsystem, wherein the captured eye-movement data of the user provides a local adjustment of the visual presentation of the objects and of the visual presentation of the hand movements in the area of the active field of view. 12. Use of a system according to claim 1 in medical imaging diagnostics, in the construction of vehicles and mechanical engineering, in material testing and in the areas of leisure, sports, entertainment and shopping. | 2,600 |
9,782 | 9,782 | 14,276,238 | 2,626 | An integrated gesture sensor module includes an optical sensor die, an application-specific integrated circuit (ASIC) die, and an optical emitter die disposed in a single package. The optical sensor die and ASIC die can be disposed in a first cavity of the package, and the optical emitter die can be disposed in a second cavity of the package. The second cavity can be conical or step-shaped so that the opening defining the cavity increases with distance from the upper surface of the optical emitter die. The upper surface of the optical emitter die may be higher than the upper surface of the optical sensor die. An optical barrier positioned between the first and second cavities can include a portion of a pre-molded, laminate, or ceramic package, molding compound, and/or metallized vias. | 1. A gesture sensor module comprising:
an optical emitter die having an emitter surface; an optical sensor die having a sensor surface facing upwards; and a package housing the optical emitter die and the optical sensor die in separate cavities. 2. The gesture sensor module of claim 1, wherein the sensor surface is higher than the emitter surface. 3. The gesture sensor module of claim 2, wherein the package comprises an optical barrier positioned laterally between the optical emitter die and the optical sensor die, and
wherein the optical barrier, the emitter surface, and the sensor surface are configured such that less than about 5% of light emitted from the emitter surface is reflected from a cover glass to the sensor surface. 4. The gesture sensor module of claim 3, wherein the emitter surface is between 0.25 mm and 0.75 mm farther from a top surface of the package than the sensor surface. 5. The gesture sensor module of claim 1, wherein the package comprises an optical barrier positioned laterally between the optical emitter die and the optical sensor die. 6. The gesture sensor module of claim 1, further comprising a glass cover. 7. The gesture sensor module of claim 1, wherein the package comprises an opening over the emitter surface having a conical shape, such that a width of the opening increases with distance from the emitter surface. 8. The gesture sensor module of claim 7, wherein the surface of the opening is reflective. 9. The gesture sensor module of claim 8, wherein the surface of the opening is coated with a reflective metal layer. 10. The gesture sensor module of claim 7, wherein the opening defines a cone with walls defining an angle with the axis of the emitter of between about 1 and 30 degrees. 11. The gesture sensor module of claim 7, wherein the cone angle is between about 0° and 30°. 12. The gesture sensor module of claim 1, wherein the emitter surface is laterally spaced from the sensor surface by between about 0.25 mm and 3 mm. 13. The gesture sensor module of claim 1, further comprising an ASIC die in electrical communication with and positioned beneath the optical sensor die. 14. The gesture sensor module of claim 1, wherein the package comprises at least one of: laminate, ceramic, and pre-molded polymer. 15. The gesture sensor module of claim 1, wherein the total package height is between about 1 and 1.4 mm. 16. A mobile computing device comprising the gesture sensor module of claim 1. 17. A gesture sensor module comprising:
a package comprising first and second cavities; an optical emitter die positioned in the first cavity; an optical sensor die positioned in the second cavity; and an optical barrier positioned laterally between the optical emitter die and the optical sensor die. 18. The gesture sensor module of claim 17, wherein the package comprises a pre-molded or ceramic package, and wherein the optical barrier comprises a portion of the pre-molded or ceramic package. 19. The gesture sensor module of claim 17, wherein the optical barrier comprises a molding compound. 20. The gesture sensor module of claim 17, wherein the optical barrier comprises a metallized via in a laminate or pre-molded plastic cover. 21. The gesture sensor module of claim 17, wherein an emitter surface of the optical emitter die is lower than an upper surface of the optical sensor die. 22. A mobile computing device comprising the gesture sensor module of claim 17. 23. A method of manufacturing a gesture sensor module, the method comprising:
providing a package substrate; disposing an optical sensor die on the package substrate; disposing an optical emitter die on the package substrate; and disposing an optical barrier between the optical sensor die and the optical emitter die. 24. The method of claim 23, wherein the package comprises first and second cavities, and wherein the optical sensor die is disposed in the first cavity, and the optical emitter die is disposed in the second cavity. 25. The method of claim 23, wherein disposing the optical sensor die and disposing the optical emitter die comprises arranging the optical sensor die such that an upper surface of the optical sensor die is higher than an emitter surface of the optical emitter die. 26. The method of claim 23, further comprising disposing optical encapsulant in at least one of the first and second cavities. 27. The method of claim 23, further comprising disposing a laminate cover over the package substrate. 28. The method of claim 23, wherein disposing the optical sensor die comprises arranging the optical sensor die on top of an ASIC die.
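The dimensional ranges recited in claims 4, 10, 12, and 15 (emitter recess depth, cone wall angle, lateral emitter-to-sensor spacing, and total package height) lend themselves to a simple design-rule check. The following sketch is hypothetical and merely encodes those ranges; it is not part of the patent.

```python
def check_module_geometry(emitter_depth_mm, cone_half_angle_deg,
                          lateral_gap_mm, package_height_mm):
    """Check a candidate gesture-sensor layout against the dimensional
    ranges recited in the claims (lengths in mm, angle in degrees)."""
    return {
        "emitter_recess_ok": 0.25 <= emitter_depth_mm <= 0.75,   # claim 4
        "cone_angle_ok": 1.0 <= cone_half_angle_deg <= 30.0,     # claim 10
        "lateral_gap_ok": 0.25 <= lateral_gap_mm <= 3.0,         # claim 12
        "package_height_ok": 1.0 <= package_height_mm <= 1.4,    # claim 15
    }

print(check_module_geometry(0.5, 15.0, 1.0, 1.2))
```

A layout passing all four checks falls within every claimed range simultaneously; the "about" qualifiers in the claims are ignored here for simplicity.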
an optical emitter die having an emitter surface; an optical sensor die having a sensor surface facing upwards; and a package housing the optical emitter die and the optical sensor die in separate cavities. 2. The gesture sensor module of claim 1, wherein the sensor surface is higher than the emitter surface. 3. The gesture sensor module of claim 2, wherein the package comprises an optical barrier positioned laterally between the optical emitter die and the optical sensor die, and
wherein the optical barrier, the emitter surface, and the sensor surface are configured such that less than about 5% of light emitted from the emitter surface is reflected from a cover glass to the sensor surface. 4. The gesture sensor module of claim 3, wherein the emitter surface is between 0.25 mm and 0.75 mm farther from a top surface of the package than the sensor surface. 5. The gesture sensor module of claim 1, wherein the package comprises an optical barrier positioned laterally between the optical emitter die and the optical sensor die. 6. The gesture sensor module of claim 1, further comprising a glass cover. 7. The gesture sensor module of claim 1, wherein the package comprises an opening over the emitter surface having a conical shape, such that a width of the opening increases with distance from the emitter surface. 8. The gesture sensor module of claim 7, wherein the surface of the opening is reflective. 9. The gesture sensor module of claim 8, wherein the surface of the opening is coated with a reflective metal layer. 10. The gesture sensor module of claim 7, wherein the opening defines a cone with walls defining an angle with the axis of the emitter of between about 1 and 30 degrees. 11. The gesture sensor module of claim 7, wherein the cone angle is between about 0° and 30°. 12. The gesture sensor module of claim 1, wherein the emitter surface is laterally spaced from the sensor surface by between about 0.25 mm and 3 mm. 13. The gesture sensor module of claim 1, further comprising an ASIC die in electrical communication with and positioned beneath the optical sensor die. 14. The gesture sensor module of claim 1, wherein the package comprises at least one of: laminate, ceramic, and pre-molded polymer. 15. The gesture sensor module of claim 1, wherein the total package height is between about 1 and 1.4 mm. 16. A mobile computing device comprising the gesture sensor module of claim 1. 17. A gesture sensor module comprising:
a package comprising first and second cavities; an optical emitter die positioned in the first cavity; an optical sensor die positioned in the second cavity; and an optical barrier positioned laterally between the optical emitter die and the optical sensor die. 18. The gesture sensor module of claim 17, wherein the package comprises a pre-molded or ceramic package, and wherein the optical barrier comprises a portion of the pre-molded or ceramic package. 19. The gesture sensor module of claim 17, wherein the optical barrier comprises a molding compound. 20. The gesture sensor module of claim 17, wherein the optical barrier comprises a metallized via in a laminate or pre-molded plastic cover. 21. The gesture sensor module of claim 17, wherein an emitter surface of the optical emitter die is lower than an upper surface of the optical sensor die. 22. A mobile computing device comprising the gesture sensor module of claim 17. 23. A method of manufacturing a gesture sensor module, the method comprising:
providing a package substrate; disposing an optical sensor die on the package substrate; disposing an optical emitter die on the package substrate; and disposing an optical barrier between the optical sensor die and the optical emitter die. 24. The method of claim 23, wherein the package comprises first and second cavities, and wherein the optical sensor die is disposed in the first cavity, and the optical emitter die is disposed in the second cavity. 25. The method of claim 23, wherein disposing the optical sensor die and disposing the optical emitter die comprises arranging the optical sensor die such that an upper surface of the optical sensor die is higher than an emitter surface of the optical emitter die. 26. The method of claim 23, further comprising disposing optical encapsulant in at least one of the first and second cavities. 27. The method of claim 23, further comprising disposing a laminate cover over the package substrate. 28. The method of claim 23, wherein disposing the optical sensor die comprises arranging the optical sensor die on top of an ASIC die.
9,783 | 9,783 | 14,911,126 | 2,698 | An appropriate user interface corresponding to a use state of an apparatus is provided.
An imaging system includes an imaging apparatus and an information processing apparatus. In the imaging apparatus, control related to an imaging operation is performed based on an operation input made in the information processing apparatus, to which the imaging apparatus connects using wireless communication. The information processing apparatus performs control for switching a display state of a display screen for operating the imaging apparatus, based on a relative position relationship with the imaging apparatus.
a control unit which performs a control for switching a display state of a display screen for operating an imaging apparatus based on a relative position relationship with the imaging apparatus. 2. The information processing apparatus according to claim 1,
wherein the control unit performs a control for switching a display state of the display screen based on a distance between the information processing apparatus and the imaging apparatus. 3. The information processing apparatus according to claim 1,
wherein the control unit performs a control for switching a display state of the display screen based on whether or not the imaging apparatus is mounted on the information processing apparatus. 4. The information processing apparatus according to claim 3,
wherein the control unit performs a control for switching a display state of the display screen based on whether or not the imaging apparatus is mounted on a display surface of the information processing apparatus. 5. The information processing apparatus according to claim 4,
wherein, in the case where the imaging apparatus is mounted on a display surface of the information processing apparatus, the control unit performs a control for switching a display state of the display screen based on a position of the imaging apparatus on a display surface of the information processing apparatus. 6. The information processing apparatus according to claim 1,
wherein the control unit causes the display screen which includes an operation object for operating the imaging apparatus to be displayed, and performs a control for changing a display state of the operation object based on the relative position relationship. 7. The information processing apparatus according to claim 1,
wherein the control unit causes the display screen which includes an operation object for operating the imaging apparatus to be displayed, and performs a control, in the case where the imaging apparatus is not mounted on the information processing apparatus, for changing a display state of the operation object based on a change in a posture of the information processing apparatus, and in the case where the imaging apparatus is mounted on the information processing apparatus, for not changing a display state of the operation object based on a change in a posture of the information processing apparatus. 8. An imaging apparatus, comprising:
a control unit which performs a control related to an imaging operation based on an operation input performed in an information processing apparatus in which a display screen is displayed for a display state to be switched based on a relative position relationship of the imaging apparatus and the information processing apparatus. 9. The imaging apparatus according to claim 8,
wherein a display state of the display screen is switched based on a distance between the information processing apparatus and the imaging apparatus. 10. The imaging apparatus according to claim 8,
wherein a display state of the display screen is switched based on whether or not the imaging apparatus is mounted on the information processing apparatus. 11. The imaging apparatus according to claim 10,
wherein a display state of the display screen is switched based on whether or not the imaging apparatus is mounted on a display surface of the information processing apparatus. 12. The imaging apparatus according to claim 11,
wherein, in the case where the imaging apparatus is mounted on a display surface of the information processing apparatus, a display state of the display screen is switched based on a position of the imaging apparatus on a display surface of the information processing apparatus. 13. The imaging apparatus according to claim 8,
wherein the information processing apparatus causes the display screen which includes an operation object for operating the imaging apparatus to be displayed, and changes a display state of the operation object based on the relative position relationship. 14. The imaging apparatus according to claim 8,
wherein the information processing apparatus causes the display screen which includes an operation object for operating the imaging apparatus to be displayed, and in the case where the imaging apparatus is not mounted on the information processing apparatus, changes a display state of the operation object based on a change in a posture of the information processing apparatus, and in the case where the imaging apparatus is mounted on the information processing apparatus, does not change a display state of the operation object based on a change in a posture of the information processing apparatus. 15. An imaging system, comprising:
an imaging apparatus in which a control related to an imaging operation is performed based on an operation input performed in an information processing apparatus by connecting to the information processing apparatus by using wireless communication; and an information processing apparatus which performs a control for switching a display state of a display screen for operating the imaging apparatus based on a relative position relationship with the imaging apparatus. 16. A control method of an information processing apparatus which performs a control for switching a display state of a display screen for operating an imaging apparatus based on a relative position relationship with the imaging apparatus. 17. A control method of an imaging apparatus which performs a control related to an imaging operation based on an operation input performed in an information processing apparatus in which a display screen is displayed for a display state to be switched based on a relative position relationship of the imaging apparatus and the information processing apparatus. 18. A program for causing a computer to execute a control for switching a display state of a display screen for operating an imaging apparatus based on a relative position relationship with the imaging apparatus. 19. A program for causing a computer to execute a control related to an imaging operation based on an operation input performed in an information processing apparatus in which a display screen is displayed for a display state to be switched based on a relative position relationship of an imaging apparatus and the information processing apparatus. | An appropriate user interface corresponding to a use state of an apparatus is provided.
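The switching controls of claims 1 through 5 (display state chosen from distance, mounted state, and mounting position) can be sketched as a simple state-selection function. The state names and the 1 m distance cutoff below are hypothetical illustrations, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class RelativePosition:
    distance_m: float          # distance between the two apparatuses (claim 2)
    mounted: bool              # imaging apparatus mounted on the device? (claim 3)
    on_display_surface: bool   # mounted on the display surface itself? (claim 4)

def select_display_state(rel: RelativePosition) -> str:
    """Map a relative position relationship to a display state for the
    screen that operates the imaging apparatus (hypothetical states)."""
    if rel.mounted:
        if rel.on_display_surface:
            return "mounted-on-display"   # layout avoids the camera footprint
        return "mounted"                  # full-screen operation controls
    if rel.distance_m < 1.0:
        return "near-remote"              # large live view, few controls
    return "far-remote"                   # simplified remote controls

print(select_display_state(RelativePosition(0.5, True, True)))
```

Claim 5 would refine the "mounted-on-display" state further using the camera's position on the display surface, and claim 7 would additionally gate posture-based changes on the mounted flag.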
An imaging system includes an imaging apparatus and an information processing apparatus. In the imaging apparatus, control related to an imaging operation is performed based on an operation input made in the information processing apparatus, to which the imaging apparatus connects using wireless communication. The information processing apparatus performs control for switching a display state of a display screen for operating the imaging apparatus, based on a relative position relationship with the imaging apparatus.
a control unit which performs a control for switching a display state of a display screen for operating an imaging apparatus based on a relative position relationship with the imaging apparatus. 2. The information processing apparatus according to claim 1,
wherein the control unit performs a control for switching a display state of the display screen based on a distance between the information processing apparatus and the imaging apparatus. 3. The information processing apparatus according to claim 1,
wherein the control unit performs a control for switching a display state of the display screen based on whether or not the imaging apparatus is mounted on the information processing apparatus. 4. The information processing apparatus according to claim 3,
wherein the control unit performs a control for switching a display state of the display screen based on whether or not the imaging apparatus is mounted on a display surface of the information processing apparatus. 5. The information processing apparatus according to claim 4,
wherein, in the case where the imaging apparatus is mounted on a display surface of the information processing apparatus, the control unit performs a control for switching a display state of the display screen based on a position of the imaging apparatus on a display surface of the information processing apparatus. 6. The information processing apparatus according to claim 1,
wherein the control unit causes the display screen which includes an operation object for operating the imaging apparatus to be displayed, and performs a control for changing a display state of the operation object based on the relative position relationship. 7. The information processing apparatus according to claim 1,
wherein the control unit causes the display screen which includes an operation object for operating the imaging apparatus to be displayed, and performs a control, in the case where the imaging apparatus is not mounted on the information processing apparatus, for changing a display state of the operation object based on a change in a posture of the information processing apparatus, and in the case where the imaging apparatus is mounted on the information processing apparatus, for not changing a display state of the operation object based on a change in a posture of the information processing apparatus. 8. An imaging apparatus, comprising:
a control unit which performs a control related to an imaging operation based on an operation input performed in an information processing apparatus in which a display screen is displayed for a display state to be switched based on a relative position relationship of the imaging apparatus and the information processing apparatus. 9. The imaging apparatus according to claim 8,
wherein a display state of the display screen is switched based on a distance between the information processing apparatus and the imaging apparatus. 10. The imaging apparatus according to claim 8,
wherein a display state of the display screen is switched based on whether or not the imaging apparatus is mounted on the information processing apparatus. 11. The imaging apparatus according to claim 10,
wherein a display state of the display screen is switched based on whether or not the imaging apparatus is mounted on a display surface of the information processing apparatus. 12. The imaging apparatus according to claim 11,
wherein, in the case where the imaging apparatus is mounted on a display surface of the information processing apparatus, a display state of the display screen is switched based on a position of the imaging apparatus on a display surface of the information processing apparatus. 13. The imaging apparatus according to claim 8,
wherein the information processing apparatus causes the display screen which includes an operation object for operating the imaging apparatus to be displayed, and changes a display state of the operation object based on the relative position relationship. 14. The imaging apparatus according to claim 8,
wherein the information processing apparatus causes the display screen which includes an operation object for operating the imaging apparatus to be displayed, and in the case where the imaging apparatus is not mounted on the information processing apparatus, changes a display state of the operation object based on a change in a posture of the information processing apparatus, and in the case where the imaging apparatus is mounted on the information processing apparatus, does not change a display state of the operation object based on a change in a posture of the information processing apparatus. 15. An imaging system, comprising:
an imaging apparatus in which a control related to an imaging operation is performed based on an operation input performed in an information processing apparatus by connecting to the information processing apparatus by using wireless communication; and an information processing apparatus which performs a control for switching a display state of a display screen for operating the imaging apparatus based on a relative position relationship with the imaging apparatus. 16. A control method of an information processing apparatus which performs a control for switching a display state of a display screen for operating an imaging apparatus based on a relative position relationship with the imaging apparatus. 17. A control method of an imaging apparatus which performs a control related to an imaging operation based on an operation input performed in an information processing apparatus in which a display screen is displayed for a display state to be switched based on a relative position relationship of the imaging apparatus and the information processing apparatus. 18. A program for causing a computer to execute a control for switching a display state of a display screen for operating an imaging apparatus based on a relative position relationship with the imaging apparatus. 19. A program for causing a computer to execute a control related to an imaging operation based on an operation input performed in an information processing apparatus in which a display screen is displayed for a display state to be switched based on a relative position relationship of an imaging apparatus and the information processing apparatus. | 2,600 |
9,784 | 9,784 | 14,406,796 | 2,646 | Wireless connectors and communication systems are described including a first communication device configured to emit a modulated signal, a second communication device configured to receive the emitted modulated signal and a waveguide disposed between the first and second communication devices and configured to wirelessly receive the emitted modulated signal from a first end of the waveguide, guide the received signal from the first end to an opposite second end of the waveguide, and wirelessly transmit the guided signal from the second end to the second communication device. In some embodiments, the telescopic waveguide includes a plurality of guiding sections, each guiding section being configured to slide within or over an adjacent guiding section inwardly to reduce a length of the telescopic waveguide and outwardly to increase the length of the telescopic waveguide. | 1. A wireless connector comprising:
a first communication device configured to emit a modulated signal; a second communication device configured to receive the emitted modulated signal; and a telescopic waveguide disposed between the first and second communication devices and configured to wirelessly receive the emitted modulated signal from a first end of the telescopic waveguide, guide the received signal from the first end to an opposite second end of the telescopic waveguide, and wirelessly transmit the guided signal from the second end to the second communication device, the telescopic waveguide being centered on an axis and comprising a plurality of guiding sections, each guiding section being centered on the axis and configured to slide within or over an adjacent guiding section inwardly to reduce a length of the telescopic waveguide and outwardly to increase the length of the telescopic waveguide. 2. A wireless connector comprising:
a first communication device configured to emit a modulated signal; a second communication device configured to receive the emitted modulated signal; and a telescopic waveguide disposed between the first and second communication devices and configured to wirelessly receive the emitted modulated signal from a first end of the telescopic waveguide, guide the received signal from the first end to an opposite second end of the telescopic waveguide, and wirelessly transmit the guided signal from the second end to the second communication device, the telescopic waveguide comprising a plurality of guiding sections, each guiding section being configured to slide within or over an adjacent guiding section inwardly to reduce a length of the telescopic waveguide and outwardly to increase the length of the telescopic waveguide, wherein at least one guiding section defines a cavity along a length of the guiding section. 3-10. (canceled) 11. The wireless connector of claim 1, wherein the waveguide is tubular and each guiding section is tubular. 12. The wireless connector of claim 11, wherein the cavity of the waveguide is configured to guide the received signal from the first end to an opposite second end of the waveguide. 13. The wireless connector of claim 1, wherein the first and second communication devices are disposed in a housing, wherein the housing has a dimension configured to change. 14. The wireless connector of claim 1, wherein the first communication device is disposed within and stationary relative to a housing and the second communication device is configured to slide into or out of the housing. 15. The wireless connector of claim 1, wherein the first and second communication devices are coupled through at least one wired connection. 16. 
The wireless connector of claim 1, wherein the first communication device includes at least one first antenna configured to emit the modulated signal and the second communication device includes at least one second antenna configured to receive the emitted modulated signal. 17. The wireless connector of claim 1, wherein at least one guiding section in the plurality of guiding sections of the waveguide comprises a solid dielectric core surrounded by an electrically conductive cladding. 18. The wireless connector of claim 1, wherein the waveguide becomes increasingly wide in at least one dimension approaching at least one end of the telescopic waveguide. 19. The wireless connector of claim 1, wherein the plurality of guiding sections of the waveguide comprises a first guiding section and an adjacent second guiding section being configured to slide inwardly and outwardly within the first guiding section, the second guiding section having a first end disposed within the first guiding section, the second guiding section becoming increasingly wide in at least one dimension approaching the first end of the second guiding section. 20. A wireless connector comprising:
a first communication device configured to emit a modulated signal; a second communication device configured to receive the emitted modulated signal; and a waveguide centered on an axis and disposed between the first and second communication devices and configured to wirelessly receive the emitted modulated signal from a first end of the waveguide, guide the received signal from the first end to an opposite second end of the waveguide, and wirelessly transmit the guided signal from the second end to the second communication device, the waveguide comprising a first guiding section and a second guiding section, each of the first and second guiding sections being centered on the axis, a first end of the first guiding section comprising a ball portion, a second end of the second guiding section comprising a socket portion, the ball portion of the first guiding section being disposed within the socket portion of the second guiding section and free to move within the socket portion in a plurality of directions. 21. The wireless connector of claim 20, wherein the second guiding section is disposed between the first guiding section and a third guiding section, the second guiding section being configured to slide within or over the third guiding section inwardly to reduce a length of the waveguide and outwardly to increase the length of the waveguide. 22. A wireless connector comprising:
a first communication device configured to emit a modulated signal; a second communication device configured to receive the emitted modulated signal; and a waveguide centered on an axis and disposed between the first and second communication devices and configured to wirelessly receive the emitted modulated signal from a first end of the waveguide, guide the received signal from the first end to an opposite second end of the waveguide, and wirelessly transmit the guided signal from the second end to the second communication device, the waveguide comprising a plurality of guiding sections, each guiding section in the plurality of guiding sections being centered on the axis, at least one guiding section in the plurality of guiding sections being rigid, at least one guiding section in the plurality of guiding sections being more flexible than another guiding section. 23. The wireless connector of claim 22, wherein at least one guiding section in the plurality of guiding sections is configured to slide within or over an adjacent guiding section in the plurality of guiding sections inwardly to reduce a length of the waveguide and outwardly to increase the length of the waveguide. 24. A wireless communication system comprising:
a plurality of first communication devices disposed on a common first substrate, each first communication device being configured to emit a modulated signal; and a plurality of waveguides, each waveguide being associated with a different first communication device and configured to wirelessly receive the modulated signal emitted by the associated first communication device from a first end of the waveguide, guide the received signal from the first end to an opposite second end of the waveguide, and wirelessly transmit the guided signal from the second end of the waveguide, at least one waveguide in the plurality of waveguides comprising a first slot at the first end of the waveguide, a portion of the first substrate being inserted into the first slot; wherein the waveguides each define a cavity along a length of the waveguide. 25. The wireless communication system of claim 24, wherein each waveguide in the plurality of waveguides comprises a first slot at the first end of the waveguide, a portion of the first substrate being inserted into each first slot. 26. The wireless communication system of claim 24 further comprising a plurality of second communication devices disposed on a common second substrate, each second communication device being associated with a different first communication device and configured to receive the modulated signal emitted by the first communication device, each waveguide in the plurality of waveguides being disposed between associated first and second communication devices and configured to wirelessly receive the modulated signal emitted by the first communication device from a first end of the waveguide, guide the received signal from the first end to an opposite second end of the waveguide, and wirelessly transmit the guided signal from the second end to the second communication device. 27. A wireless connector comprising:
a first communication device configured to emit a modulated signal; a second communication device configured to receive the emitted modulated signal; and a waveguide disposed between the first and second communication devices and configured to wirelessly receive the emitted modulated signal from a first end of the telescopic waveguide, guide the received signal from the first end to an opposite second end of the waveguide, and wirelessly transmit the guided signal from the second end to the second communication device, the waveguide having a non-uniform permittivity along at least a portion of a length of the waveguide. 28. The wireless connector of claim 27, wherein the second communication device is disposed between the first and second ends of the waveguide adjacent to a side of the waveguide, the waveguide being configured to wirelessly transmit the modulated signal from the side of the waveguide to the second communication device. 29. The wireless connector of claim 27, wherein each of the first and second communication devices comprises a transceiver. 30. The wireless connector of claim 27, wherein the waveguide comprises a core of a first dielectric material and the waveguide becomes increasingly narrow in at least one dimension approaching at least one end of the telescopic waveguide. | 2,600
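The telescoping geometry claimed above — coaxial guiding sections that slide within or over their neighbours to shorten or lengthen the waveguide — can be pictured with a small model. This is purely illustrative: the class, the `min_overlap` parameter, and the length rules are assumptions for the sketch, not taken from the patent.

```python
# Illustrative model of a telescopic waveguide made of nested coaxial
# guiding sections. Extending slides sections outward; retracting nests
# them, while each joint keeps a minimum overlap so the guided-signal
# path stays continuous from the first end to the second end.

class TelescopicWaveguide:
    def __init__(self, section_lengths, min_overlap):
        # section_lengths: length of each guiding section, in order.
        # min_overlap: minimum overlap kept at each sliding joint.
        self.sections = list(section_lengths)
        self.min_overlap = min_overlap

    @property
    def max_length(self):
        # Fully extended: adjacent sections overlap only by the minimum.
        n_joints = len(self.sections) - 1
        return sum(self.sections) - n_joints * self.min_overlap

    @property
    def min_length(self):
        # Fully retracted: every section nested inside the longest one.
        return max(self.sections)

    def set_length(self, target):
        # Sliding the sections clamps the usable length to this range.
        return max(self.min_length, min(self.max_length, target))
```

Under these assumptions, three 10-unit sections with a 2-unit joint overlap give a reach of 10 to 26 units; any requested length outside that range is clamped.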
9,785 | 9,785 | 15,273,115 | 2,645 | In an embodiment, an apparatus measures, via a first directional receive antenna array and a second directional receive antenna array that are each coupled to an apparatus, one or more signals that are transmitted by one or more transmitters of the UE. The first and second directional receive antenna arrays are oriented at different directions. The apparatus determines first and second representative values for the first and second directional receive antenna arrays, respectively based on some or all of the measurements. The apparatus determines whether the UE is within a given region based on the first and second representative values. | 1. A method of determining a region of a user equipment (UE), comprising:
measuring, via a first directional receive antenna array coupled to an apparatus, one or more signals that are transmitted by one or more transmitters of the UE; measuring, via a second directional receive antenna array coupled to the apparatus, the one or more signals that are transmitted by the one or more transmitters, wherein the first and second directional receive antenna arrays are oriented towards different directions; determining a first representative value for the first directional receive antenna array based on some or all of the measurements of the one or more signals by the first directional receive antenna array; determining a second representative value for the second directional receive antenna array based on some or all of the measurements of the one or more signals by the second directional receive antenna array; and determining whether the UE is within a given region based on the first and second representative values, wherein the first directional receive antenna is oriented towards an interior region of an enclosed environment and the second directional receive antenna is oriented towards an exterior region of the enclosed environment, wherein the given region is the enclosed environment. 2. The method of claim 1, wherein the first and second directional receive antenna arrays are BLUETOOTH antenna arrays. 3. The method of claim 1, further comprising:
blocking, permitting or performing one or more operations based on whether the UE is determined to be within the given region. 4. The method of claim 1,
wherein the first and second representative values are based on signal strength measurements of the one or more signals by the first and second directional receive antenna arrays, respectively, and wherein the determining whether the UE is within the given region determines the UE to be inside the given region in response to the first representative value being greater than the second representative value, or wherein the determining whether the UE is within the given region determines the UE to be outside the given region in response to the second representative value being greater than the first representative value. 5. (canceled) 6. The method of claim 1, wherein the enclosed environment is a vehicle and the determining whether the UE is within the given region comprises determining whether the UE is inside or outside of the vehicle. 7. The method of claim 1, wherein the first and second directional receive antenna arrays include substantially non-overlapping antenna patterns. 8. The method of claim 1,
wherein the first directional receive antenna array and a third directional receive antenna array include substantially non-overlapping antenna patterns, and wherein the substantially non-overlapping antenna patterns cover different regions of the enclosed environment. 9. The method of claim 8,
wherein the first and third directional receive antenna arrays are connected to a radio frequency (RF) switch that is in turn connected to a radio, and wherein the RF switch is configured to tune to one of the first and third directional receive antenna arrays to facilitate the measurements of the one or more signals by the first and third directional receive antenna arrays. 10. The method of claim 8, wherein the determining whether the UE is within the given region determines whether a current region of the UE is inside of the enclosed environment. 11. The method of claim 1, further comprising:
measuring, via at least one additional directional receive antenna array coupled to the apparatus, the one or more signals that are transmitted by the one or more transmitters; determining at least one additional representative value for the at least one additional directional receive antenna array based on some or all of the measurements of the one or more signals by the at least one additional directional receive antenna array, wherein the determining whether the UE is within the given region is based on two or more of the first, second and at least one additional representative values. 12. The method of claim 11,
wherein the at least one additional directional receive antenna array includes multiple directional receive antenna arrays that are deployed throughout a perimeter of a vehicle, and wherein the determining determines whether the UE is within an interior of the vehicle, an exterior of the vehicle, or a particular portion of the interior or exterior of the vehicle. 13. The method of claim 1, wherein the determining of the first representative value includes:
obtaining a first vertically polarized signal measurement and a first horizontally polarized signal measurement for each of the one or more signals via the first directional receive antenna, wherein each of the measurements of the one or more signals by the first directional receive antenna array that is used to determine the first representative value corresponds to a larger of the first vertically polarized signal measurement and the first horizontally polarized signal measurement or an average of the first vertically polarized signal measurement and the first horizontally polarized signal measurement. 14. The method of claim 13, wherein the determining of the second representative value includes:
obtaining a second vertically polarized signal measurement and a second horizontally polarized signal measurement for each of the one or more signals via the second directional receive antenna, wherein each of the measurements of the one or more signals by the second directional receive antenna array that is used to determine the second representative value corresponds to a larger of the second vertically polarized signal measurement and the second horizontally polarized signal measurement or an average of the second vertically polarized signal measurement and the second horizontally polarized signal measurement. 15. The method of claim 1, wherein the one or more signals comprise a plurality of signals over a plurality of frequencies, and wherein the determining of the first representative value includes averaging some or all of the measurements of the plurality of signals by the first directional receive antenna over different frequencies to achieve frequency diversity. 16. The method of claim 1,
wherein a first antenna pattern of the first directional receive antenna array is defined based on beam-forming techniques to have a first degree of spatial coverage, wherein a second antenna pattern of the second directional receive antenna array is defined based on beam-forming techniques to have a second degree of spatial coverage, and wherein the given region is defined in part by the first and second degrees of spatial coverage. 17. The method of claim 16,
wherein the first and/or second degrees of spatial coverage correspond to 90 degrees, or wherein the first and/or second degrees of spatial coverage correspond to 180 degrees. 18. The method of claim 1, further comprising:
receiving information characterizing a polarization at which the one or more signals are transmitted by the UE, wherein the first and second representative values are determined based on the received polarization information. 19. An apparatus configured to determine a region of a user equipment (UE), comprising:
a first directional receive antenna array capable of measuring one or more signals transmitted by one or more transmitters of the UE; a second directional receive antenna array capable of measuring the one or more signals transmitted by the one or more transmitters of the UE, wherein the first and second directional receive antenna arrays are oriented towards different directions; a communications interface coupled to the first directional receive antenna array and the second directional receive antenna array; and a processor coupled to the communications interface and configured to:
measure, via the first directional receive antenna array, one or more signals that are transmitted by one or more transmitters of the UE;
measure, via the second directional receive antenna array, the one or more signals that are transmitted by the one or more transmitters;
determine a first representative value for the first directional receive antenna array based on some or all of the measurements of the one or more signals by the first directional receive antenna array;
determine a second representative value for the second directional receive antenna array based on some or all of the measurements of the one or more signals by the second directional receive antenna array; and
determine whether the UE is within a given region based on the first and second representative values,
wherein the first directional receive antenna is oriented towards an interior region of an enclosed environment and the second directional receive antenna is oriented towards an exterior region of the enclosed environment, wherein the given region is the enclosed environment. 20. The apparatus of claim 19, wherein the processor is further configured to block, permit or perform one or more operations based on whether the UE is determined to be within the given region. 21. The apparatus of claim 19,
wherein the first directional receive antenna array and a third directional receive antenna array are each oriented towards different regions of the enclosed environment. 22. The apparatus of claim 19, further comprising at least one additional directional receive antenna array coupled to the apparatus capable of measuring the one or more signals transmitted by the one or more transmitters of the UE, wherein the processor is further configured to:
measure, via the at least one additional directional receive antenna array coupled to the apparatus, the one or more signals that are transmitted by the one or more transmitters; determine at least one additional representative value for the at least one additional directional receive antenna array based on some or all of the measurements of the one or more signals by the at least one additional directional receive antenna array, wherein the processor is configured to determine whether the UE is within the given region based on two or more of the first, second and at least one additional representative values. 23. An apparatus configured to determine a region of a user equipment (UE), comprising:
means for measuring, via a first directional receive antenna array coupled to the apparatus, one or more signals that are transmitted by one or more transmitters of the UE; means for measuring, via a second directional receive antenna array coupled to the apparatus, the one or more signals that are transmitted by the one or more transmitters, wherein the first and second directional receive antenna arrays are oriented towards different directions; means for determining a first representative value for the first directional receive antenna array based on some or all of the measurements of the one or more signals by the first directional receive antenna array; means for determining a second representative value for the second directional receive antenna array based on some or all of the measurements of the one or more signals by the second directional receive antenna array; and means for determining whether the UE is within a given region based on the first and second representative values, wherein the first directional receive antenna is oriented towards an interior region of an enclosed environment and the second directional receive antenna is oriented towards an exterior region of the enclosed environment, wherein the given region is the enclosed environment. 24. The apparatus of claim 23, wherein the first and second directional receive antenna arrays are BLUETOOTH antenna arrays. 25. The apparatus of claim 23, further comprising:
means for blocking, permitting or performing one or more operations based on whether the UE is determined to be within the given region. 26. (canceled) 27. The apparatus of claim 23,
wherein the first directional receive antenna array and a third directional receive antenna array are each oriented towards different regions of the enclosed environment. 28. A non-transitory computer-readable medium containing instructions stored thereon, which, when executed by an apparatus configured to determine a region of a user equipment (UE), cause the apparatus to perform operations, the instructions comprising:
at least one instruction configured to cause the apparatus to measure, via a first directional receive antenna array coupled to the apparatus, one or more signals that are transmitted by one or more transmitters of the UE; at least one instruction configured to cause the apparatus to measure, via a second directional receive antenna array coupled to the apparatus, the one or more signals that are transmitted by the one or more transmitters, wherein the first and second directional receive antenna arrays are oriented towards different directions; at least one instruction configured to cause the apparatus to determine a first representative value for the first directional receive antenna array based on some or all of the measurements of the one or more signals by the first directional receive antenna array; at least one instruction configured to cause the apparatus to determine a second representative value for the second directional receive antenna array based on some or all of the measurements of the one or more signals by the second directional receive antenna array; and at least one instruction configured to cause the apparatus to determine whether the UE is within a given region based on the first and second representative values, wherein the first directional receive antenna is oriented towards an interior region of an enclosed environment and the second directional receive antenna is oriented towards an exterior region of the enclosed environment, wherein the given region is the enclosed environment. 29. The non-transitory computer-readable medium of claim 28, further comprising:
at least one instruction configured to cause the apparatus to block, permit or perform one or more operations based on whether the UE is determined to be within the given region. 30. The non-transitory computer-readable medium of claim 28,
wherein the first and second directional receive antenna arrays are each oriented towards different regions of an enclosed environment, the given region including a portion of the enclosed environment, or wherein one of the first and second directional receive antenna arrays is oriented towards an interior region of the enclosed environment and the other of the first and second directional receive antenna arrays is oriented towards an exterior region of the enclosed environment, the given region including the enclosed environment. | In an embodiment, an apparatus measures, via a first directional receive antenna array and a second directional receive antenna array that are each coupled to the apparatus, one or more signals that are transmitted by one or more transmitters of the UE. The first and second directional receive antenna arrays are oriented towards different directions. The apparatus determines first and second representative values for the first and second directional receive antenna arrays, respectively, based on some or all of the measurements. The apparatus determines whether the UE is within a given region based on the first and second representative values. 1. A method of determining a region of a user equipment (UE), comprising:
measuring, via a first directional receive antenna array coupled to an apparatus, one or more signals that are transmitted by one or more transmitters of the UE; measuring, via a second directional receive antenna array coupled to the apparatus, the one or more signals that are transmitted by the one or more transmitters, wherein the first and second directional receive antenna arrays are oriented towards different directions; determining a first representative value for the first directional receive antenna array based on some or all of the measurements of the one or more signals by the first directional receive antenna array; determining a second representative value for the second directional receive antenna array based on some or all of the measurements of the one or more signals by the second directional receive antenna array; and determining whether the UE is within a given region based on the first and second representative values, wherein the first directional receive antenna is oriented towards an interior region of an enclosed environment and the second directional receive antenna is oriented towards an exterior region of the enclosed environment, wherein the given region is the enclosed environment. 2. The method of claim 1, wherein the first and second directional receive antenna arrays are BLUETOOTH antenna arrays. 3. The method of claim 1, further comprising:
blocking, permitting or performing one or more operations based on whether the UE is determined to be within the given region. 4. The method of claim 1,
wherein the first and second representative values are based on signal strength measurements of the one or more signals by the first and second directional receive antenna arrays, respectively, and wherein the determining whether the UE is within the given region determines the UE to be inside the given region in response to the first representative value being greater than the second representative value, or wherein the determining whether the UE is within the given region determines the UE to be outside the given region in response to the second representative value being greater than the first representative value. 5. (canceled) 6. The method of claim 1, wherein the enclosed environment is a vehicle and the determining whether the UE is within the given region comprises determining whether the UE is inside or outside of the vehicle. 7. The method of claim 1, wherein the first and second directional receive antenna arrays include substantially non-overlapping antenna patterns. 8. The method of claim 1,
wherein the first directional receive antenna array and a third directional receive antenna array include substantially non-overlapping antenna patterns, and wherein the substantially non-overlapping antenna patterns cover different regions of the enclosed environment. 9. The method of claim 8,
wherein the first and third directional receive antenna arrays are connected to a radio frequency (RF) switch that is in turn connected to a radio, and wherein the RF switch is configured to tune to one of the first and third directional receive antenna arrays to facilitate the measurements of the one or more signals by the first and third directional receive antenna arrays. 10. The method of claim 8, wherein the determining whether the UE is within the given region determines whether a current region of the UE is inside of the enclosed environment. 11. The method of claim 1, further comprising:
measuring, via at least one additional directional receive antenna array coupled to the apparatus, the one or more signals that are transmitted by the one or more transmitters; determining at least one additional representative value for the at least one additional directional receive antenna array based on some or all of the measurements of the one or more signals by the at least one additional directional receive antenna array, wherein the determining whether the UE is within the given region is based on two or more of the first, second and at least one additional representative values. 12. The method of claim 11,
wherein the at least one additional directional receive antenna array includes multiple directional receive antenna arrays that are deployed throughout a perimeter of a vehicle, and wherein the determining determines whether the UE is within an interior of the vehicle, an exterior of the vehicle, or a particular portion of the interior or exterior of the vehicle. 13. The method of claim 1, wherein the determining of the first representative value includes:
obtaining a first vertically polarized signal measurement and a first horizontally polarized signal measurement for each of the one or more signals via the first directional receive antenna, wherein each of the measurements of the one or more signals by the first directional receive antenna array that is used to determine the first representative value corresponds to a larger of the first vertically polarized signal measurement and the first horizontally polarized signal measurement or an average of the first vertically polarized signal measurement and the first horizontally polarized signal measurement. 14. The method of claim 13, wherein the determining of the second representative value includes:
obtaining a second vertically polarized signal measurement and a second horizontally polarized signal measurement for each of the one or more signals via the second directional receive antenna, wherein each of the measurements of the one or more signals by the second directional receive antenna array that is used to determine the second representative value corresponds to a larger of the second vertically polarized signal measurement and the second horizontally polarized signal measurement or an average of the second vertically polarized signal measurement and the second horizontally polarized signal measurement. 15. The method of claim 1, wherein the one or more signals comprise a plurality of signals over a plurality of frequencies, and wherein the determining of the first representative value includes averaging some or all of the measurements of the plurality of signals by the first directional receive antenna over different frequencies to achieve frequency diversity. 16. The method of claim 1,
wherein a first antenna pattern of the first directional receive antenna array is defined based on beam-forming techniques to have a first degree of spatial coverage, wherein a second antenna pattern of the second directional receive antenna array is defined based on beam-forming techniques to have a second degree of spatial coverage, and wherein the given region is defined in part by the first and second degrees of spatial coverage. 17. The method of claim 16,
wherein the first and/or second degrees of spatial coverage correspond to 90 degrees, or wherein the first and/or second degrees of spatial coverage correspond to 180 degrees. 18. The method of claim 1, further comprising:
receiving information characterizing a polarization at which the one or more signals are transmitted by the UE, wherein the first and second representative values are determined based on the received polarization information. 19. An apparatus configured to determine a region of a user equipment (UE), comprising:
a first directional receive antenna array capable of measuring one or more signals transmitted by one or more transmitters of the UE; a second directional receive antenna array capable of measuring the one or more signals transmitted by the one or more transmitters of the UE, wherein the first and second directional receive antenna arrays are oriented towards different directions; a communications interface coupled to the first directional receive antenna array and the second directional receive antenna array; and a processor coupled to the communications interface and configured to:
measure, via the first directional receive antenna array, one or more signals that are transmitted by one or more transmitters of the UE;
measure, via the second directional receive antenna array, the one or more signals that are transmitted by the one or more transmitters;
determine a first representative value for the first directional receive antenna array based on some or all of the measurements of the one or more signals by the first directional receive antenna array;
determine a second representative value for the second directional receive antenna array based on some or all of the measurements of the one or more signals by the second directional receive antenna array; and
determine whether the UE is within a given region based on the first and second representative values,
wherein the first directional receive antenna is oriented towards an interior region of an enclosed environment and the second directional receive antenna is oriented towards an exterior region of the enclosed environment, wherein the given region is the enclosed environment. 20. The apparatus of claim 19, wherein the processor is further configured to block, permit or perform one or more operations based on whether the UE is determined to be within the given region. 21. The apparatus of claim 19,
wherein the first directional receive antenna array and a third directional receive antenna array are each oriented towards different regions of the enclosed environment. 22. The apparatus of claim 19, further comprising at least one additional directional receive antenna array coupled to the apparatus capable of measuring the one or more signals transmitted by the one or more transmitters of the UE, wherein the processor is further configured to:
measure, via the at least one additional directional receive antenna array coupled to the apparatus, the one or more signals that are transmitted by the one or more transmitters; determine at least one additional representative value for the at least one additional directional receive antenna array based on some or all of the measurements of the one or more signals by the at least one additional directional receive antenna array, wherein the processor is configured to determine whether the UE is within the given region based on two or more of the first, second and at least one additional representative values. 23. An apparatus configured to determine a region of a user equipment (UE), comprising:
means for measuring, via a first directional receive antenna array coupled to the apparatus, one or more signals that are transmitted by one or more transmitters of the UE; means for measuring, via a second directional receive antenna array coupled to the apparatus, the one or more signals that are transmitted by the one or more transmitters, wherein the first and second directional receive antenna arrays are oriented towards different directions; means for determining a first representative value for the first directional receive antenna array based on some or all of the measurements of the one or more signals by the first directional receive antenna array; means for determining a second representative value for the second directional receive antenna array based on some or all of the measurements of the one or more signals by the second directional receive antenna array; and means for determining whether the UE is within a given region based on the first and second representative values, wherein the first directional receive antenna is oriented towards an interior region of an enclosed environment and the second directional receive antenna is oriented towards an exterior region of the enclosed environment, wherein the given region is the enclosed environment. 24. The apparatus of claim 23, wherein the first and second directional receive antenna arrays are BLUETOOTH antenna arrays. 25. The apparatus of claim 23, further comprising:
means for blocking, permitting or performing one or more operations based on whether the UE is determined to be within the given region. 26. (canceled) 27. The apparatus of claim 23,
wherein the first directional receive antenna array and a third directional receive antenna array are each oriented towards different regions of the enclosed environment. 28. A non-transitory computer-readable medium containing instructions stored thereon, which, when executed by an apparatus configured to determine a region of a user equipment (UE), cause the apparatus to perform operations, the instructions comprising:
at least one instruction configured to cause the apparatus to measure, via a first directional receive antenna array coupled to the apparatus, one or more signals that are transmitted by one or more transmitters of the UE; at least one instruction configured to cause the apparatus to measure, via a second directional receive antenna array coupled to the apparatus, the one or more signals that are transmitted by the one or more transmitters, wherein the first and second directional receive antenna arrays are oriented towards different directions; at least one instruction configured to cause the apparatus to determine a first representative value for the first directional receive antenna array based on some or all of the measurements of the one or more signals by the first directional receive antenna array; at least one instruction configured to cause the apparatus to determine a second representative value for the second directional receive antenna array based on some or all of the measurements of the one or more signals by the second directional receive antenna array; and at least one instruction configured to cause the apparatus to determine whether the UE is within a given region based on the first and second representative values, wherein the first directional receive antenna is oriented towards an interior region of an enclosed environment and the second directional receive antenna is oriented towards an exterior region of the enclosed environment, wherein the given region is the enclosed environment. 29. The non-transitory computer-readable medium of claim 28, further comprising:
at least one instruction configured to cause the apparatus to block, permit or perform one or more operations based on whether the UE is determined to be within the given region. 30. The non-transitory computer-readable medium of claim 28,
wherein the first and second directional receive antenna arrays are each oriented towards different regions of an enclosed environment, the given region including a portion of the enclosed environment, or wherein one of the first and second directional receive antenna arrays is oriented towards an interior region of the enclosed environment and the other of the first and second directional receive antenna arrays is oriented towards an exterior region of the enclosed environment, the given region including the enclosed environment. | 2,600 |
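The region test recited in claims 1, 4, 13 and 15 above (combine per-signal vertical/horizontal polarized measurements into one representative value per array, then compare the interior-facing and exterior-facing arrays) can be sketched in Python as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the names `representative_value` and `inside_region`, the RSSI figures, and the use of dBm signal-strength values are all hypothetical.

```python
# Hypothetical sketch of the two-array region test; function names and RSSI
# values are illustrative, not taken from the patent.
from statistics import mean

def representative_value(measurements, use_average_polarization=False):
    """Combine per-signal polarized RSSI measurements (dBm) into one
    representative value for an antenna array.

    measurements: list of (vertical_dbm, horizontal_dbm) tuples, one per
    received signal/frequency.
    """
    per_signal = [
        # Claims 13/14: take the larger of the vertically and horizontally
        # polarized measurements, or their average.
        mean(vh) if use_average_polarization else max(vh)
        for vh in measurements
    ]
    # Claim 15: average across signals on different frequencies for
    # frequency diversity.
    return mean(per_signal)

def inside_region(interior_measurements, exterior_measurements):
    """Claim 4: the UE is inside the enclosed environment (e.g. a vehicle)
    when the interior-facing array's representative value exceeds the
    exterior-facing array's, and outside otherwise."""
    first = representative_value(interior_measurements)
    second = representative_value(exterior_measurements)
    return first > second

# Example: the UE is measured more strongly by the interior-facing array
# on three channels, so it is judged to be inside.
print(inside_region([(-48, -52), (-50, -47), (-49, -51)],
                    [(-63, -60), (-61, -64), (-62, -59)]))  # prints: True
```

Averaging over channels is one simple choice of "representative value"; the claims leave the combining rule open, so a median or trimmed mean would fit the same structure.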
9,786 | 9,786 | 15,350,460 | 2,632 | Some embodiments disclosed herein relate to a transmitter. The transmitter includes a digital modulator adapted to provide a digital modulated RF signal based on a multi-bit representation of data and a multi-bit representation of a carrier wave. A digital-to-analog converter (DAC) is adapted to generate an analog modulated RF signal based on the digital modulated RF signal. A resonant circuit is coupled to an output of the DAC and adapted to filter undesired frequency components from the analog modulated RF signal. | 1. A method of generating a radio-frequency (RF) signal in a transmitter, comprising:
using a digital polar modulator to generate a multi-bit representation of the RF signal, where the multi-bit representation of the RF signal changes in time according to a sampling rate; and converting the multi-bit representation of the RF signal into a time-varying analog RF signal by using a digital-to-analog converter (DAC) having a resonant circuit coupled to an output of the DAC. 2. The method of claim 1, wherein generating the multi-bit representation of the RF signal comprises:
providing a multi-bit representation of a frequency control word; providing a multi-bit representation of phase data that changes in time according to the sampling rate; providing a multi-bit phase-modulated signal based on both the frequency control word and the phase data, wherein the multi-bit phase modulated signal changes in time according to the sampling rate. 3. The method of claim 2, wherein generating the multi-bit representation of the RF signal further comprises:
altering the multi-bit phase-modulated signal based on amplitude data to provide a multi-bit amplitude-and-phase-modulated signal that changes in time according to the sampling rate. 4. A transmitter, comprising:
a baseband processor to provide a multi-bit representation of a frequency control word and I- and Q-data; a cordic to convert the I- and Q-data into polar data which includes a phase component, θ(t), and an amplitude component, r(t); a digital polar modulator to provide a digital modulated RF signal based on both the polar data and the multi-bit representation of the frequency control word; a digital-to-analog converter (DAC) coupled to an output of the digital polar modulator and configured to generate an analog modulated RF signal based on the digital modulated RF signal; and a resonant circuit coupled to an output of the DAC, the resonant circuit to filter undesired frequency components from the analog modulated RF signal. 5. The transmitter of claim 4, where the output of the DAC is a single-ended output. 6. The transmitter of claim 4, where the output of the DAC is a differential output. 7. The transmitter of claim 6, where the DAC comprises:
a first variable current source coupled to a first leg of the differential output; and a second variable current source coupled to a second leg of the differential output. 8. The transmitter of claim 7:
where the first variable current source comprises multiple current sources coupled to the first leg of the differential output; where the second variable current source comprises multiple current sources coupled to the second leg of the differential output; wherein the multiple current sources in the first variable current source and the multiple current sources in the second variable current source are arranged to cooperatively deliver different currents to the first and second legs of the differential output for different values of the digital modulated RF signal. 9. The transmitter of claim 7, where the first variable current source comprises:
a first switching element in series with a first current element, where the first switching element is coupled between the first current element and the first leg of the differential output; and a second switching element in series with a second current element, where the second switching element is coupled between the second current element and the first leg of the differential output; wherein the first and second switching elements are arranged to cooperatively deliver different currents to the first leg of the differential output for different values of the digital modulated RF signal. 10. The transmitter of claim 4, where the resonant circuit comprises at least one of the following three elements: a surface-acoustic wave (SAW) filter, a bulk acoustic wave (BAW) filter, or a duplexer. 11. The transmitter of claim 4, where the resonant circuit comprises an LC circuit including an inductor in parallel with a capacitor. 12. The transmitter of claim 11, where the capacitor in the LC circuit comprises a bank of capacitors that are arranged to provide the LC circuit with an adjustable capacitance. 13. A circuit that includes a digital modulator, the digital modulator comprising:
a differentiator to receive successive phase values at a sampling rate and provide differentiated phase values based on the phase values; an adder to provide successive instantaneous phase offset values at the sampling rate based on both the differentiated phase values and a frequency control word; a phase accumulator to provide successive instantaneous phase values at the sampling rate based on the instantaneous phase offset values; and an angle-to-amplitude converter to convert the instantaneous phase values to a multi-bit representation of a phase modulated wave at the sampling rate. 14. The circuit of claim 13, where the digital modulator further comprises:
a multiplier to receive successive amplitude values and the multi-bit representation of the phase modulated wave at the sampling rate; and the multiplier to output a multi-bit amplitude-and-phase-modulated signal that changes in time according to the sampling rate. 15. The circuit of claim 13, further comprising:
a digital-to-analog converter (DAC) to generate an analog modulated RF signal based on the multi-bit representation of the phase modulated wave. 16. The circuit of claim 15, further comprising:
a digital up-conversion element operably coupled between the DAC and the angle-to-amplitude converter, where the digital up-conversion element increases a frequency of the phase modulated wave from a first frequency to a second frequency. 17. The circuit of claim 15, further comprising:
a resonant circuit coupled to an output of the DAC and adapted to filter undesired frequency components from the analog modulated RF signal. 18. The circuit of claim 14, further comprising:
a baseband processor to provide both the successive phase values and a frequency control word to the digital modulator, where the frequency control word is associated with a frequency channel over which the phase modulated wave is to be transmitted. 19. The circuit of claim 18, where the baseband processor provides the phase values in I-Q format, the circuit further comprising:
a cordic to convert the phase values in I-Q format to phase values in polar format. 20. A method of generating a radio-frequency (RF) signal in a transmitter, comprising:
providing a multi-bit representation of a frequency control word, where the frequency control word is associated with a carrier frequency; providing a multi-bit representation of data in polar format according to a sampling rate; providing a multi-bit digital polar modulated RF signal based on both the multi-bit representation of data in polar format and the multi-bit representation of the frequency control word; converting the multi-bit digital polar modulated RF signal into an analog RF signal; and removing unwanted frequency components from the analog RF signal by using a resonant circuit. 21. The method of claim 1, where the resonant circuit comprises an LC circuit including an inductor in parallel with a capacitor. | Some embodiments disclosed herein relate to a transmitter. The transmitter includes a digital modulator adapted to provide a digital modulated RF signal based on a multi-bit representation of data and a multi-bit representation of a carrier wave. A digital-to-analog converter (DAC) is adapted to generate an analog modulated RF signal based on the digital modulated RF signal. A resonant circuit is coupled to an output of the DAC and adapted to filter undesired frequency components from the analog modulated RF signal. 1. A method of generating a radio-frequency (RF) signal in a transmitter, comprising:
using a digital polar modulator to generate a multi-bit representation of the RF signal, where the multi-bit representation of the RF signal changes in time according to a sampling rate; and converting the multi-bit representation of the RF signal into a time-varying analog RF signal by using a digital-to-analog converter (DAC) having a resonant circuit coupled to an output of the DAC. 2. The method of claim 1, wherein generating the multi-bit representation of the RF signal comprises:
providing a multi-bit representation of a frequency control word; providing a multi-bit representation of phase data that changes in time according to the sampling rate; providing a multi-bit phase-modulated signal based on both the frequency control word and the phase data, wherein the multi-bit phase modulated signal changes in time according to the sampling rate. 3. The method of claim 2, wherein generating the multi-bit representation of the RF signal further comprises:
altering the multi-bit phase-modulated signal based on amplitude data to provide a multi-bit amplitude-and-phase-modulated signal that changes in time according to the sampling rate. 4. A transmitter, comprising:
a baseband processor to provide a multi-bit representation of a frequency control word and I- and Q-data; a cordic to convert the I- and Q-data into polar data which includes a phase component, θ(t), and an amplitude component, r(t); a digital polar modulator to provide a digital modulated RF signal based on both the polar data and the multi-bit representation of the frequency control word; a digital-to-analog converter (DAC) coupled to an output of the digital polar modulator and configured to generate an analog modulated RF signal based on the digital modulated RF signal; and a resonant circuit coupled to an output of the DAC, the resonant circuit to filter undesired frequency components from the analog modulated RF signal. 5. The transmitter of claim 4, where the output of the DAC is a single-ended output. 6. The transmitter of claim 4, where the output of the DAC is a differential output. 7. The transmitter of claim 6, where the DAC comprises:
a first variable current source coupled to a first leg of the differential output; and a second variable current source coupled to a second leg of the differential output. 8. The transmitter of claim 7:
where the first variable current source comprises multiple current sources coupled to the first leg of the differential output; where the second variable current source comprises multiple current sources coupled to the second leg of the differential output; wherein the multiple current sources in the first variable current source and the multiple current sources in the second variable current source are arranged to cooperatively deliver different currents to the first and second legs of the differential output for different values of the digital modulated RF signal. 9. The transmitter of claim 7, where the first variable current source comprises:
a first switching element in series with a first current element, where the first switching element is coupled between the first current element and the first leg of the differential output; and a second switching element in series with a second current element, where the second switching element is coupled between the second current element and the first leg of the differential output; wherein the first and second switching elements are arranged to cooperatively deliver different currents to the first leg of the differential output for different values of the digital modulated RF signal. 10. The transmitter of claim 4, where the resonant circuit comprises at least one of the following three elements: a surface-acoustic wave (SAW) filter, a bulk acoustic wave (BAW) filter, or a duplexer. 11. The transmitter of claim 4, where the resonant circuit comprises an LC circuit including an inductor in parallel with a capacitor. 12. The transmitter of claim 11, where the capacitor in the LC circuit comprises a bank of capacitors that are arranged to provide the LC circuit with an adjustable capacitance. 13. A circuit that includes a digital modulator, the digital modulator comprising:
a differentiator to receive successive phase values at a sampling rate and provide differentiated phase values based on the phase values; an adder to provide successive instantaneous phase offset values at the sampling rate based on both the differentiated phase values and a frequency control word; a phase accumulator to provide successive instantaneous phase values at the sampling rate based on the instantaneous phase offset values; and an angle-to-amplitude converter to convert the instantaneous phase values to a multi-bit representation of a phase modulated wave at the sampling rate. 14. The circuit of claim 13, where the digital modulator further comprises:
a multiplier to receive successive amplitude values and the multi-bit representation of the phase modulated wave at the sampling rate; and the multiplier to output a multi-bit amplitude-and-phase-modulated signal that changes in time according to the sampling rate. 15. The circuit of claim 13, further comprising:
a digital-to-analog converter (DAC) to generate an analog modulated RF signal based on the multi-bit representation of the phase modulated wave. 16. The circuit of claim 15, further comprising:
a digital up-conversion element operably coupled between the DAC and the angle-to-amplitude converter, where the digital up-conversion element increases a frequency of the phase modulated wave from a first frequency to a second frequency. 17. The circuit of claim 15, further comprising:
a resonant circuit coupled to an output of the DAC and adapted to filter undesired frequency components from the analog modulated RF signal. 18. The circuit of claim 14, further comprising:
a baseband processor to provide both the successive phase values and a frequency control word to the digital modulator, where the frequency control word is associated with a frequency channel over which the phase modulated wave is to be transmitted. 19. The circuit of claim 18, where the baseband processor provides the phase values in I-Q format, the circuit further comprising:
a CORDIC to convert the phase values in I-Q format to phase values in polar format. 20. A method of generating a radio-frequency (RF) signal in a transmitter, comprising:
providing a multi-bit representation of a frequency control word, where the frequency control word is associated with a carrier frequency; providing a multi-bit representation of data in polar format according to a sampling rate; providing a multi-bit digital polar modulated RF signal based on both the multi-bit representation of data in polar format and the multi-bit representation of the frequency control word; converting the multi-bit digital polar modulated RF signal into an analog RF signal; and removing unwanted frequency components from the analog RF signal by using a resonant circuit. 21. The method of claim 20, where the resonant circuit comprises an LC circuit including an inductor in parallel with a capacitor. | 2,600 |
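The claim-13 signal chain (differentiator, adder, phase accumulator, angle-to-amplitude converter, plus the claim-14 multiplier) can be sketched in a few lines. This is a hypothetical illustration, not the patented implementation: the function name `polar_modulate`, the radian units, and the cosine angle-to-amplitude conversion are all assumptions made for the sketch.

```python
import math

def polar_modulate(phase_values, amplitude_values, fcw):
    """Hypothetical sketch of the claim-13 pipeline, one output per sample:
    differentiator -> adder (+ frequency control word) -> phase accumulator
    -> angle-to-amplitude converter -> multiplier (claim 14)."""
    out = []
    acc = 0.0          # phase accumulator state (radians)
    prev_phase = 0.0   # differentiator state
    for phase, amp in zip(phase_values, amplitude_values):
        d_phase = phase - prev_phase          # differentiated phase value
        prev_phase = phase
        offset = d_phase + fcw                # instantaneous phase offset
        acc = (acc + offset) % (2 * math.pi)  # instantaneous phase value
        wave = math.cos(acc)                  # angle-to-amplitude conversion
        out.append(amp * wave)                # amplitude-and-phase-modulated sample
    return out
```

With constant phase and amplitude inputs, the frequency control word alone sets the output tone, which matches the claim's notion that the word selects the frequency channel over which the wave is transmitted.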
9,787 | 9,787 | 15,135,612 | 2,616 | A display device for a vehicle includes: a first image control section that displays a first entire image in a certain entire display region; a second image control section that displays a second entire image that is changed from the first entire image in the certain entire display region; and an intermediate image control section that displays a single intermediate entire image indicating an intermediate stage of a change between the first entire image and the second entire image in the certain entire display region. The second image control section displays a final display state in which a certain mark is displayed in a certain portion within the certain display region as the second entire image. The intermediate image control section displays the intermediate entire image having an afterimage of the certain mark being extended in a track direction on a track of the certain mark moving to the certain portion. | 1. A display device for a vehicle, which performs information display by a liquid crystal display, the display device comprising:
a first image control section that displays a first entire image in a certain entire display region of the liquid crystal display; a second image control section that displays a second entire image that is changed from the first entire image displayed by the first image control section in the certain entire display region; and an intermediate image control section that displays a single intermediate entire image indicating an intermediate stage of a change between the first entire image by the first image control section and the second entire image by the second image control section in the certain entire display region, wherein the second image control section displays a final display state in which a certain mark is displayed in a certain portion within the certain display region as the second entire image, and wherein the intermediate image control section displays the intermediate entire image having an afterimage of the certain mark being extended in a track direction on a track of the certain mark moving to the certain portion. 2. The display device for a vehicle according to claim 1,
wherein the afterimage is a gradation image in which gradation is changed in an extended direction. 3. The display device for a vehicle according to claim 1,
wherein the second image control section displays the second entire image changed from the first entire image when instruction contents from a vehicle crew are input thereto. 4. The display device for a vehicle according to claim 1,
wherein the first image control section and the second image control section display a plurality of state marks corresponding to a plurality of states of the vehicle in each entire image as a plurality of the certain marks, and wherein the second image control section displays the second entire image in which an arrangement of the plurality of state marks is changed from the first entire image. 5. The display device for a vehicle according to claim 1,
wherein each of the first image control section and the second image control section displays a still image whose position is not changed in the first entire image and the second entire image, and wherein the intermediate image control section displays the afterimage overlapping the still images in an overlapping manner. | 2,600 |
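The intermediate-image behaviour (claims 1 and 2: an afterimage extended along the mark's track, with gradation changing in the extended direction) can be illustrated with a small interpolation routine. This is a hedged sketch: the linear track, the 0-to-1 alpha scale, and the function name are assumptions, not the patented display logic.

```python
def afterimage_trail(start, end, steps):
    """Hypothetical sketch: sample positions along the mark's track and pair
    each with an alpha value that fades toward the tail, i.e. a gradation
    changing in the extended direction."""
    (x0, y0), (x1, y1) = start, end
    trail = []
    for i in range(steps + 1):
        t = i / steps  # 0.0 at the old position, 1.0 at the mark's final position
        pos = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        trail.append((pos, t))  # alpha == t: tail transparent, head opaque
    return trail
```

Rendering every `(pos, alpha)` pair in one frame would give a single intermediate entire image in which the mark trails an extended, fading afterimage along its track.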
9,788 | 9,788 | 15,220,614 | 2,674 | Described are methods that allow credentials of a first client station to authenticate a second client station. An exemplary method includes associating a first client station with a second client station, the first client station including credential information, the associating authorizing the second client station to use the credential information, transmitting, by the second client station, an association request to a network, the network utilizing the credential information to authorize a connection, the second client station configured to perform a proxy functionality for requests received from the network to be forwarded to the first client station and responses received from the first client station to be forwarded to the network, determining, by the network, whether the credential information received from the second client station is authenticated and establishing a connection between the second client station and the network using the credential information of the first client station. | 1-19. (canceled) 20. A method, comprising:
at a client station:
transmitting an identification request, received from a network to which the client station is attempting to connect, to a further client station, the further client station including credential information for the network;
receiving an identification response from the further client station including the credential information;
transmitting the identification response to the network; and
establishing a connection between the client station and the network using the credential information of the further client station. 21. The method of claim 20, wherein the client station and the further client station are associated with an account that authorizes the client station to use the credential information of the further client station. 22. The method of claim 20, wherein the identification request was received in response to an open association request. 23. The method of claim 20, wherein the identification response includes an international mobile subscriber identity (IMSI) and a related key. 24. The method of claim 20, further comprising:
receiving a challenge request from the network; transmitting the challenge request to the further client station; receiving a challenge response from the further client station; and transmitting the challenge response to the network. 25. The method of claim 24, wherein the challenge request includes an Authentication and Key Agreement (AKA) challenge. 26. The method of claim 24, wherein the challenge request includes a request/SIM/start. 27. The method of claim 26, wherein the challenge response includes a request/SIM/start nonce. 28. The method of claim 26, further comprising:
receiving a second challenge request from the network, wherein the second challenge request is a request/SIM/challenge; transmitting the second challenge request to the further client station; receiving a second challenge response from the further client station; and transmitting the second challenge response to the network. 29. The method of claim 20, further comprising:
notifying the further client station when the connection between the client station and the network is established. 30. The method of claim 20, wherein the identification request and the identification response are based on an Extensible Authentication Protocol (EAP). 31. A method, comprising:
at a client station:
transmitting an association request to a network;
receiving an identification request from the network;
transmitting an identification response to the network, wherein the identification response includes credential information for a further client station that is associated with the client station such that the credential information is authorized to be used by the client station; and
establishing a connection between the client station and the network using the credential information of the further client station. 32. The method of claim 31, further comprising:
storing, in a memory of the client station, the credential information of the further client station. 33. The method of claim 31, further comprising:
storing, in a memory of the client station, information indicating that the network is a known network for which the credential information is used to connect thereto. 34. The method of claim 31, wherein the network is a wireless local area network (WLAN) and comprises one of a private WLAN requiring password information for connection thereto or a HotSpot requiring Subscriber Identity Module (SIM) information for connection thereto. 35. A method, comprising:
at a client station:
receiving, from a further client station, an identification request related to connecting to a network, wherein the client station and the further client station are associated such that credential information of the client station is authorized to be used by the further client station to connect to the network;
generating an identification response to the identification request, the identification response being a function of the credential information; and
transmitting the identification response to the further client station. 36. The method of claim 35, further comprising:
transmitting an identifier of each network to which the client station has connected using the credential information. 37. The method of claim 35, further comprising:
receiving, from the further client station, a further request that was generated by the network; generating a response to the further request; transmitting the response to the further request to the further client station. 38. The method of claim 37, wherein the further request includes an Authentication and Key Agreement (AKA) challenge and generating the response includes executing an AKA algorithm. 39. The method of claim 37, wherein the further request includes a request/SIM/start and generating the response includes verifying the request/SIM/start. | 2,600 |
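The proxy flow of claims 20 and 24, in which the connecting station relays each network request to the credential-holding station and relays the response back until the network accepts, can be sketched with toy classes. Everything here (the class names, the tuple-shaped credentials, the single-round network) is a hypothetical stand-in for the real EAP-SIM/AKA exchange.

```python
class Network:
    """Toy network: issues an identity request, then checks the response."""
    def __init__(self, expected_imsi, expected_key):
        self._expected = (expected_imsi, expected_key)
        self.connected = False

    def identification_request(self):
        return ("identity?", None)

    def submit(self, response):
        if response == self._expected:
            self.connected = True  # connection established (claim 20)
        return None                # no further requests in this sketch


class CredentialHolder:
    """Toy first client station holding the credential information."""
    def __init__(self, imsi, key):
        self._imsi, self._key = imsi, key

    def respond(self, request):
        return (self._imsi, self._key)


def proxy_connect(network, holder):
    """Second client station: forwards each network request to the holder and
    each holder response back to the network, per the proxy functionality."""
    request = network.identification_request()
    while request is not None:
        response = holder.respond(request)
        request = network.submit(response)
    return network.connected
```

A multi-round exchange (e.g. the claim-24 challenge request/response) would reuse the same loop: `submit` would keep returning follow-up requests until authentication completes.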
9,789 | 9,789 | 14,950,664 | 2,616 | A system and method for storing a plurality of data points, each data point representing a geographic location, a first set of data points representing a first geometric object and a second set of data points representing a second geometric object. The system and method then remove a first data point from the first set of data points representing the first geometric object based on at least a distance between a first location represented by the first data point and a second location represented by a second data point of the second set of data points representing a second geometric object. | 1-23. (canceled) 24. A system, comprising:
a data storage mechanism that stores data points, wherein each data point comprises a latitudinal and a longitudinal coordinate; and a processor that executes instructions to cause the processor to perform operations, comprising:
removing a first data point from a first set of the data points based on i) a first distance between a first coordinate represented by the first data point and a second coordinate represented by a second data point of a second set of the data points, ii) a second distance between the first coordinate represented by the first data point and a third coordinate represented by a third data point of the first set of data points, and iii) a relationship between the first and second distances. 25. The system of claim 24, wherein the first set of data points represents a first geometric object and the second set of data points represents a second geometric object. 26. The system of claim 25, wherein the operations further comprise:
generating a viewable display including the first geometric object without the first data point; and generating the viewable display including the second geometric object. 27. The system of claim 26, further comprising:
a display displaying the viewable display of the first and second geometric objects. 28. The system of claim 26, wherein a value of one of the first distance, the second distance or the relationship between the first and second distances is based on a level of detail reduction of the first geometric object in the viewable display. 29. The system of claim 24, wherein after the first data point is removed, the first set of data points are re-stored in the data storage mechanism without the first data point. 30. The system of claim 25, wherein the operations further comprise:
removing a fourth data point from the first set of data points based on i) a third distance between a fourth coordinate represented by the fourth data point and a fifth coordinate represented by a fifth data point of the second set of data points, ii) a fourth distance between the fourth coordinate represented by the fourth data point and a sixth coordinate represented by a sixth data point of the first set of data points, and iii) a relationship between the third and fourth distances. 31. The system of claim 30, wherein the second data point and the fifth data point are the same data points. 32. The system of claim 30, wherein the removing processes are repeated until a desired level of detail reduction of the first geometric object is achieved. 33. A method, comprising:
storing a plurality of data points, wherein each data point comprises a latitudinal and a longitudinal coordinate; removing a first data point from a first set of data points based on i) a first distance between a first coordinate represented by the first data point and a second coordinate represented by a second data point of a second set of data points, ii) a second distance between the first coordinate represented by the first data point and a third coordinate represented by a third data point of the first set of data points, and iii) a relationship between the first and second distances. 34. The method of claim 33, wherein the first set of data points represents a first geometric object and the second set of data points represents a second geometric object. 35. The method of claim 34, wherein the removing is further based on a dynamic threshold value, the dynamic threshold value being set based on a zoom level at which the first and second geometric objects are to be displayed. 36. The method of claim 33, further comprising:
generating a viewable display including the first geometric object without the first data point; and generating the viewable display including the second geometric object. 37. The method of claim 33, further comprising:
receiving an indication defining the second geometric object as a directorial object. 38. The method of claim 33, wherein the removing is based on the following formula:
∥(Pn, Plast)∥ > T*ln(1 + ∥(Pn, O)∥),
where Pn is the first data point, Plast is a further data point of the first geometric object, T is a threshold value and O is the second data point. 39. A method, comprising:
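The claim-38 criterion above can be sketched as a short decimation pass: a point Pn is kept only if its distance from the last kept point Plast exceeds T*ln(1 + distance from Pn to the reference point O). The use of Euclidean distance on raw coordinates and the function name `decimate` are assumptions of this sketch, not part of the claimed method.

```python
import math

def decimate(points, reference, threshold):
    """Hypothetical sketch of the claim-38 test: keep point Pn only if
    ||(Pn, Plast)|| > T * ln(1 + ||(Pn, O)||), where Plast is the last kept
    point of the first object and O is a point of the second object."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    kept = [points[0]]  # the first point of the object is always retained
    for p in points[1:]:
        if dist(p, kept[-1]) > threshold * math.log(1 + dist(p, reference)):
            kept.append(p)
    return kept
```

Because the right-hand side grows with distance from O, detail is preserved near the reference point (e.g. a directorial object such as a route) and thinned out far from it, which is the level-of-detail behaviour the claims describe.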
determining a first distance between a first location represented by a first data point of a first set of data points and a second location represented by a second data point of the first set of data points, wherein each data point represents a latitude and longitude coordinate; determining a second distance between the first location and a third location represented by a third data point of a second set of data points; and determining a relationship between the first and second distances. 40. The method of claim 39, wherein each set of data points is organized into a geometric object. 41. The method of claim 39, wherein the relationship between the first and second distances is determined based on the following formula:
∥(Pn, Plast)∥ > T*ln(1 + ∥(Pn, O)∥),
where Pn is the first data point, Plast is the second data point, T is a threshold value and O is the third data point. 42. The method of claim 39, further comprising:
removing the first data point from the first set of data points based on the relationship between the first and second distances; determining, when the first data point is removed from the first set of data points, a third distance between a fourth location represented by a fourth data point of the first set of data points and the second location; determining a fourth distance between the fourth location and the third location; and determining a further relationship between the third and fourth distances, wherein the fourth data point is one of removed from the first set of data points and maintained in the first set of data points based on the further relationship between the third and fourth distances. 43. The method of claim 39, further comprising:
maintaining the first data point in the first set of data points based on the relationship between the first and second distances; determining, when the first data point is maintained in the first set of data points, a third distance between a fourth location represented by a fourth data point of the first set of data points and the first location; determining a fourth distance between the fourth location and the third location; and determining a further relationship between the third and fourth distances, wherein the fourth data point is one of removed from the first set of data points and maintained in the first set of data points based on the further relationship between the third and fourth distances. | A system and method for storing a plurality data points, each data point representing a geographic location, a first set of data points representing a first geometric object and a second set of data points representing a second geometric object. The system and method then remove a first data point from the first set of data points representing the first geometric object based on at least a distance between a first location represented by the first data point and a second location represented by a second data point of the second set of data points representing a second geometric object.1-23. (canceled) 24. A system, comprising:
a data storage mechanism that stores data points, wherein each data point comprises of a latitudinal and longitudinal coordinate; and a processor that executes instructions to cause the processor to perform operations, comprising:
removing a first data point from a first set of the data points based on i) a first distance between a first coordinate represented by the first data point and a second coordinate represented by a second data point of a second set of the data points, ii) a second distance between the first coordinate represented by the first data point and a third coordinate represented by a third data point of the first set of data points, and iii) a relationship between the first and second distances. 25. The system of claim 24, wherein the first set of data points represents a first geometric object and the second set of data points represents a second geometric object. 26. The system of claim 25, wherein the operations further comprise:
generating a viewable display including the first geometric object without the first data point; and generating the viewable display including the second geometric object. 27. The system of claim 26, further comprising:
a display displaying the viewable display of the first and second geometric objects. 28. The system of claim 26, wherein a value of one of the first distance, the second distance or the relationship between the first and second distances is based on a level of detail reduction of the first geometric object in the viewable display. 29. The system of claim 24, wherein after the first data point is removed, the first set of data points are re-stored in the data storage mechanism without the first data point. 30. The system of claim 25, wherein the operations further comprise:
removing a fourth data point from the first set of data points based on i) a third distance between a fourth coordinate represented by the fourth data point and a fifth coordinate represented by a fifth data point of the second set of data points, ii) a fourth distance between the fourth coordinate represented by the fourth data point and a sixth coordinate represented by a sixth data point of the first set of data points, and iii) a relationship between the third and fourth distances. 31. The system of claim 30, wherein the second data point and the fifth data point are the same data points. 32. The system of claim 30, wherein the removing processes are repeated until a desired level of detail reduction of the first geometric object is achieved. 33. A method, comprising:
storing a plurality of data points, wherein each data point comprises a latitudinal and longitudinal coordinate; removing a first data point from a first set of data points based on i) a first distance between a first coordinate represented by the first data point and a second coordinate represented by a second data point of a second set of data points, ii) a second distance between the first coordinate represented by the first data point and a third coordinate represented by a third data point of the first set of data points, and iii) a relationship between the first and second distances. 34. The method of claim 33, wherein the first set of data points represents a first geometric object and the second set of data points represents a second geometric object. 35. The method of claim 34, wherein the removing is further based on a dynamic threshold value, the dynamic threshold value being set based on a zoom level at which the first and second geometric objects are to be displayed. 36. The method of claim 33, further comprising:
generating a viewable display including the first geometric object without the first data point; and generating the viewable display including the second geometric object. 37. The method of claim 33, further comprising:
receiving an indication defining the second geometric object as a directorial object. 38. The method of claim 33, wherein the removing is based on the following formula:
∥(Pn,Plast)∥>T*ln(1+∥(Pn,O)∥),
where Pn is the first data point, Plast is a further data point of the first geometric object, T is a threshold value and O is the second data point. 39. A method, comprising:
determining a first distance between a first location represented by a first data point of a first set of data points and a second location represented by a second data point of the first set of data points, wherein each data point represents a latitude and longitude coordinate; determining a second distance between the first location and a third location represented by a third data point of the second set of data points; and determining a relationship between the first and second distances. 40. The method of claim 39, wherein each set of data points is organized into a geometric object. 41. The method of claim 39, wherein the relationship between the first and second distances is determined based on the following formula:
∥(Pn,Plast)∥>T*ln(1+∥(Pn,O)∥),
where Pn is the first data point, Plast is the second data point, T is a threshold value and O is the third data point. 42. The method of claim 39, further comprising:
removing the first data point from the first set of data points based on the relationship between the first and second distances; determining, when the first data point is removed from the first set of data points, a third distance between a fourth location represented by a fourth data point of the first set of data points and the second location; determining a fourth distance between the fourth location and the third location; and determining a further relationship between the third and fourth distances, wherein the fourth data point is one of removed from the first set of data points and maintained in the first set of data points based on the further relationship between the third and fourth distances. 43. The method of claim 39, wherein the method further comprises:
maintaining the first data point in the first set of data points based on the relationship between the first and second distances; determining, when the first data point is maintained in the first set of data points, a third distance between a fourth location represented by a fourth data point of the first set of data points and the first location; determining a fourth distance between the fourth location and the third location; and determining a further relationship between the third and fourth distances, wherein the fourth data point is one of removed from the first set of data points and maintained in the first set of data points based on the further relationship between the third and fourth distances. | 2,600 |
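The removal criterion recited in claims 38 and 41 above can be sketched as a short Python routine. This is an illustrative reading only: the helper names (`distance`, `simplify`) and the use of planar Euclidean distance are assumptions, and the patent itself supplies no implementation.

```python
import math

def distance(p, q):
    # Planar Euclidean distance between two (lat, lon) pairs; a real
    # implementation would more likely use a geodesic distance.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def simplify(points, origin, threshold):
    """Keep a point Pn only if ||(Pn, Plast)|| > T * ln(1 + ||(Pn, O)||),
    where Plast is the most recently kept point of the first geometric
    object, O is a data point of the second object, and T is the
    threshold value (claim 35 suggests T may vary with zoom level)."""
    if not points:
        return []
    kept = [points[0]]  # the first vertex is always retained here
    for p in points[1:]:
        if distance(p, kept[-1]) > threshold * math.log(1 + distance(p, origin)):
            kept.append(p)
    return kept
```

Note the effect of the logarithm: points near the reference point O face a small right-hand side and tend to be kept, while points far from O must also be far from the last kept vertex to survive, which matches the claimed level-of-detail reduction around a designated object.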
9,790 | 9,790 | 15,096,497 | 2,666 | Embodiments of the invention provide for biometric state switching. In one embodiment, a biometric state switching method includes storing in a database different fingerprints, each in connection with a different state of a corresponding application. Thereafter, an end user is authenticated into use of a computing device by receiving in a fingerprint scanner affixed to the computing device a scanned fingerprint, matching the scanned fingerprint to one of the different fingerprints in the database, and if the scanned fingerprint matches one of the different fingerprints in the database, authenticating the end user into use of the computing device. Finally, subsequent to authentication, an application is executed in memory of the computing device, a state is identified for the application stored in connection with the one of the different fingerprints matched to the scanned fingerprint, and a state of execution of the application is set to the identified state. | 1. A biometric state switching method comprising:
storing in a database a multiplicity of different fingerprints, each in connection with a different state of a corresponding application; authenticating an end user into use of a computing device by receiving in a fingerprint scanner affixed to the computing device a scanned fingerprint, matching the scanned fingerprint to one of the different fingerprints in the database, and if the scanned fingerprint matches one of the different fingerprints in the database, authenticating the end user into use of the computing device; and, subsequent to authentication, executing an application in memory of the computing device, identifying a state for the application stored in connection with the one of the different fingerprints matched to the scanned fingerprint, and setting a state of execution of the application to the identified state. 2. The method of claim 1, further comprising receiving a scan of a different fingerprint, matching the different fingerprint to a different state of execution of the application and setting the state of execution of the application to the different state. 3. The method of claim 1, wherein the multiplicity of different fingerprints is acquired by the fingerprint scanner. 4. The method of claim 1, wherein the database is disposed within the computing device. 5. The method of claim 2, wherein the different fingerprint is of a different finger of the end user. 6. A data processing system configured for biometric state switching, the system comprising:
a computing device with memory and at least one processor; a fingerprint scanner disposed upon an encasement of the computing device and coupled to the memory and at least one processor; a database coupled to the computing device and storing a multiplicity of different fingerprints, each in connection with a different state of a corresponding application; and, a biometric state switching module comprising program code which, when executed in the memory by the at least one processor, is enabled to authenticate an end user into use of the computing device by receiving in the fingerprint scanner a scanned fingerprint, match the scanned fingerprint to one of the different fingerprints in the database, and if the scanned fingerprint matches one of the different fingerprints in the database, authenticate the end user into use of the computing device, and subsequent to authentication, to execute an application in the memory of the computing device, identify a state for the application stored in connection with the one of the different fingerprints matched to the scanned fingerprint, and set a state of execution of the application to the identified state. 7. The system of claim 6, wherein the program code is further enabled to receive a scan of a different fingerprint, match the different fingerprint to a different state of execution of the application and set the state of execution of the application to the different state. 8. The system of claim 6, wherein the multiplicity of different fingerprints is acquired by the fingerprint scanner. 10. The system of claim 7, wherein the different fingerprint is of a different finger of the end user. 11. A computer program product for biometric state switching, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a device to cause the device to perform a method comprising:
storing in a database a multiplicity of different fingerprints, each in connection with a different state of a corresponding application; authenticating an end user into use of a computing device by receiving in a fingerprint scanner affixed to the computing device a scanned fingerprint, matching the scanned fingerprint to one of the different fingerprints in the database, and if the scanned fingerprint matches one of the different fingerprints in the database, authenticating the end user into use of the computing device; and, subsequent to authentication, executing an application in memory of the computing device, identifying a state for the application stored in connection with the one of the different fingerprints matched to the scanned fingerprint, and setting a state of execution of the application to the identified state. 12. The computer program product of claim 11, wherein the method further comprises receiving a scan of a different fingerprint, matching the different fingerprint to a different state of execution of the application and setting the state of execution of the application to the different state. 13. The computer program product of claim 11, wherein the multiplicity of different fingerprints is acquired by the fingerprint scanner. 14. The computer program product of claim 11, wherein the database is disposed within the computing device. 15. The computer program product of claim 12, wherein the different fingerprint is of a different finger of the end user. | Embodiments of the invention provide for biometric state switching. In one embodiment, a biometric state switching method includes storing in a database different fingerprints, each in connection with a different state of a corresponding application. 
Thereafter, an end user is authenticated into use of a computing device by receiving in a fingerprint scanner affixed to the computing device a scanned fingerprint, matching the scanned fingerprint to one of the different fingerprints in the database, and if the scanned fingerprint matches one of the different fingerprints in the database, authenticating the end user into use of the computing device. Finally, subsequent to authentication, an application is executed in memory of the computing device, a state is identified for the application stored in connection with the one of the different fingerprints matched to the scanned fingerprint, and a state of execution of the application is set to the identified state. 1. A biometric state switching method comprising:
storing in a database a multiplicity of different fingerprints, each in connection with a different state of a corresponding application; authenticating an end user into use of a computing device by receiving in a fingerprint scanner affixed to the computing device a scanned fingerprint, matching the scanned fingerprint to one of the different fingerprints in the database, and if the scanned fingerprint matches one of the different fingerprints in the database, authenticating the end user into use of the computing device; and, subsequent to authentication, executing an application in memory of the computing device, identifying a state for the application stored in connection with the one of the different fingerprints matched to the scanned fingerprint, and setting a state of execution of the application to the identified state. 2. The method of claim 1, further comprising receiving a scan of a different fingerprint, matching the different fingerprint to a different state of execution of the application and setting the state of execution of the application to the different state. 3. The method of claim 1, wherein the multiplicity of different fingerprints is acquired by the fingerprint scanner. 4. The method of claim 1, wherein the database is disposed within the computing device. 5. The method of claim 2, wherein the different fingerprint is of a different finger of the end user. 6. A data processing system configured for biometric state switching, the system comprising:
a computing device with memory and at least one processor; a fingerprint scanner disposed upon an encasement of the computing device and coupled to the memory and at least one processor; a database coupled to the computing device and storing a multiplicity of different fingerprints, each in connection with a different state of a corresponding application; and, a biometric state switching module comprising program code which, when executed in the memory by the at least one processor, is enabled to authenticate an end user into use of the computing device by receiving in the fingerprint scanner a scanned fingerprint, match the scanned fingerprint to one of the different fingerprints in the database, and if the scanned fingerprint matches one of the different fingerprints in the database, authenticate the end user into use of the computing device, and subsequent to authentication, to execute an application in the memory of the computing device, identify a state for the application stored in connection with the one of the different fingerprints matched to the scanned fingerprint, and set a state of execution of the application to the identified state. 7. The system of claim 6, wherein the program code is further enabled to receive a scan of a different fingerprint, match the different fingerprint to a different state of execution of the application and set the state of execution of the application to the different state. 8. The system of claim 6, wherein the multiplicity of different fingerprints is acquired by the fingerprint scanner. 10. The system of claim 7, wherein the different fingerprint is of a different finger of the end user. 11. A computer program product for biometric state switching, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a device to cause the device to perform a method comprising:
storing in a database a multiplicity of different fingerprints, each in connection with a different state of a corresponding application; authenticating an end user into use of a computing device by receiving in a fingerprint scanner affixed to the computing device a scanned fingerprint, matching the scanned fingerprint to one of the different fingerprints in the database, and if the scanned fingerprint matches one of the different fingerprints in the database, authenticating the end user into use of the computing device; and, subsequent to authentication, executing an application in memory of the computing device, identifying a state for the application stored in connection with the one of the different fingerprints matched to the scanned fingerprint, and setting a state of execution of the application to the identified state. 12. The computer program product of claim 11, wherein the method further comprises receiving a scan of a different fingerprint, matching the different fingerprint to a different state of execution of the application and setting the state of execution of the application to the different state. 13. The computer program product of claim 11, wherein the multiplicity of different fingerprints is acquired by the fingerprint scanner. 14. The computer program product of claim 11, wherein the database is disposed within the computing device. 15. The computer program product of claim 12, wherein the different fingerprint is of a different finger of the end user. | 2,600 |
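The fingerprint-to-application-state mapping claimed in application 15,096,497 can be sketched in a few lines of Python. The in-memory dictionary stands in for the claimed database, and exact-string matching stands in for a real fingerprint matcher; both are simplifying assumptions, and the class and method names are illustrative.

```python
class StateSwitcher:
    """Minimal sketch: each enrolled fingerprint template is stored with
    an application name and a state; a matched scan both authenticates
    the end user and restores that state (claims 1, 2 and 5)."""

    def __init__(self):
        self.db = {}          # fingerprint template -> (app_name, state)
        self.app_states = {}  # app_name -> current execution state

    def enroll(self, template, app_name, state):
        self.db[template] = (app_name, state)

    def scan(self, template):
        match = self.db.get(template)  # a real matcher compares minutiae
        if match is None:
            return False               # authentication fails
        app_name, state = match
        self.app_states[app_name] = state  # set execution to stored state
        return True
```

Enrolling two different fingers of the same user against two states of one application reproduces the claim-2/claim-5 behavior: scanning the second finger switches the application to the other state.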
9,791 | 9,791 | 14,701,953 | 2,683 | A system receives body area network (BAN) data from a BAN that is associated with a person as the person comes into the vicinity of an entry or access point to a structure or property. The system operates a door, gate, or other barrier at the entry or access point as a function of the BAN data. | 1. A system comprising:
a computer processor configured to:
receive body area network (BAN) data; and
operate a point of entry as a function of the BAN data. 2. The system of claim 1, wherein the computer processor is configured to:
transmit a signal, the signal configured for reception by a BAN, wherein the BAN is associated with a person; receive from the BAN an identification associated with the person; and operate the point of entry as a function of the identification. 3. The system of claim 2, wherein the computer processor comprises a server computer processor and a reader computer processor, and wherein:
the reader computer processor is configured to transmit the signal for reception by the BAN; the reader computer processor is configured to receive a response from the BAN, the response comprising the identification associated with the person; the reader computer processor is configured to transmit to the server computer processor the identification associated with the person; the server computer processor is configured to determine an access privilege to the point of entry for the person as a function of the identification of the person; the server computer processor is configured to transmit to the reader computer processor the access privilege to the point of entry for the person; and the reader computer processor is configured to receive the access privilege to the point of entry for the person and operate the point of entry as a function of the access privilege. 4. The system of claim 3, wherein the computer processor comprises a controller computer processor, and wherein the controller computer processor is coupled to the reader computer processor, and the reader computer processor is configured to send a signal to the controller computer processor to operate the point of entry as a function of the access privilege. 5. The system of claim 1, wherein the BAN comprises a transceiver and one or more sensors. 6. The system of claim 5, wherein the transceiver couples the BAN to the computer processor. 7. The system of claim 1, wherein the point of entry comprises a door, gate, safe, cabinet, closet, or room. 8. The system of claim 1, wherein the BAN data comprises biometric data, and the biometric data are used to operate the point of entry. 9. The system of claim 1, wherein the BAN data comprises identification data for a person associated with the BAN, and the identification data are used to operate the point of entry. 10. The system of claim 9, wherein the identification data comprise data relating to an implant in a person. 11. 
A computer readable medium comprising instructions that, when executed by a processor, execute a process comprising:
receiving body area network (BAN) data; and operating a point of entry as a function of the BAN data. 12. The computer readable medium of claim 11, comprising instructions for:
transmitting a signal, the signal configured for reception by a BAN, wherein the BAN is associated with a person; receiving from the BAN an identification associated with the person; and operating the point of entry as a function of the identification. 13. The computer readable medium of claim 12, comprising instructions for:
causing a reader computer processor to transmit the signal for reception by the BAN; causing the reader computer processor to receive a response from the BAN, the response comprising the identification associated with the person; causing the reader computer processor to transmit to a server computer processor the identification associated with the person; causing the server computer processor to determine an access privilege to the point of entry for the person as a function of the identification of the person; causing the server computer processor to transmit to the reader computer processor the access privilege to the point of entry for the person; and causing the reader computer processor to receive the access privilege to the point of entry for the person and operate the point of entry as a function of the access privilege. 14. The computer readable medium of claim 13, comprising instructions causing the reader computer processor to transmit a signal to a controller computer processor to operate the point of entry as a function of the access privilege. 15. The computer readable medium of claim 11, wherein the BAN data comprise biometric data, and the biometric data are used to operate the point of entry. 16. The computer readable medium of claim 11, wherein the BAN data comprise identification data for a person associated with the BAN, and the identification data are used to operate the point of entry. 17. A process comprising:
receiving body area network (BAN) data; and operating a point of entry as a function of the BAN data. 18. The process of claim 17, comprising:
transmitting a signal, the signal configured for reception by a BAN, wherein the BAN is associated with a person; receiving from the BAN an identification associated with the person; and operating the point of entry as a function of the identification. 19. The process of claim 18, comprising:
causing a reader computer processor to transmit the signal for reception by the BAN; causing the reader computer processor to receive a response from the BAN, the response comprising the identification associated with the person; causing the reader computer processor to transmit to a server computer processor the identification associated with the person; causing the server computer processor to determine an access privilege to the point of entry for the person as a function of the identification of the person; causing the server computer processor to transmit to the reader computer processor the access privilege to the point of entry for the person; and causing the reader computer processor to receive the access privilege to the point of entry for the person and operate the point of entry as a function of the access privilege. 20. The process of claim 19, comprising transmitting a signal to a controller computer processor to operate the point of entry as a function of the access privilege. | A system receives body area network (BAN) data from a BAN that is associated with a person as the person comes into the vicinity of an entry or access point to a structure or property. The system operates a door, gate, or other barrier at the entry or access point as a function of the BAN data.1. A system comprising:
a computer processor configured to:
receive body area network (BAN) data; and
operate a point of entry as a function of the BAN data. 2. The system of claim 1, wherein the computer processor is configured to:
transmit a signal, the signal configured for reception by a BAN, wherein the BAN is associated with a person; receive from the BAN an identification associated with the person; and operate the point of entry as a function of the identification. 3. The system of claim 2, wherein the computer processor comprises a server computer processor and a reader computer processor, and wherein:
the reader computer processor is configured to transmit the signal for reception by the BAN; the reader computer processor is configured to receive a response from the BAN, the response comprising the identification associated with the person; the reader computer processor is configured to transmit to the server computer processor the identification associated with the person; the server computer processor is configured to determine an access privilege to the point of entry for the person as a function of the identification of the person; the server computer processor is configured to transmit to the reader computer processor the access privilege to the point of entry for the person; and the reader computer processor is configured to receive the access privilege to the point of entry for the person and operate the point of entry as a function of the access privilege. 4. The system of claim 3, wherein the computer processor comprises a controller computer processor, and wherein the controller computer processor is coupled to the reader computer processor, and the reader computer processor is configured to send a signal to the controller computer processor to operate the point of entry as a function of the access privilege. 5. The system of claim 1, wherein the BAN comprises a transceiver and one or more sensors. 6. The system of claim 5, wherein the transceiver couples the BAN to the computer processor. 7. The system of claim 1, wherein the point of entry comprises a door, gate, safe, cabinet, closet, or room. 8. The system of claim 1, wherein the BAN data comprises biometric data, and the biometric data are used to operate the point of entry. 9. The system of claim 1, wherein the BAN data comprises identification data for a person associated with the BAN, and the identification data are used to operate the point of entry. 10. The system of claim 9, wherein the identification data comprise data relating to an implant in a person. 11. 
A computer readable medium comprising instructions that, when executed by a processor, execute a process comprising:
receiving body area network (BAN) data; and operating a point of entry as a function of the BAN data. 12. The computer readable medium of claim 11, comprising instructions for:
transmitting a signal, the signal configured for reception by a BAN, wherein the BAN is associated with a person; receiving from the BAN an identification associated with the person; and operating the point of entry as a function of the identification. 13. The computer readable medium of claim 12, comprising instructions for:
causing a reader computer processor to transmit the signal for reception by the BAN; causing the reader computer processor to receive a response from the BAN, the response comprising the identification associated with the person; causing the reader computer processor to transmit to a server computer processor the identification associated with the person; causing the server computer processor to determine an access privilege to the point of entry for the person as a function of the identification of the person; causing the server computer processor to transmit to the reader computer processor the access privilege to the point of entry for the person; and causing the reader computer processor to receive the access privilege to the point of entry for the person and operate the point of entry as a function of the access privilege. 14. The computer readable medium of claim 13, comprising instructions causing the reader computer processor to transmit a signal to a controller computer processor to operate the point of entry as a function of the access privilege. 15. The computer readable medium of claim 11, wherein the BAN data comprise biometric data, and the biometric data are used to operate the point of entry. 16. The computer readable medium of claim 11, wherein the BAN data comprise identification data for a person associated with the BAN, and the identification data are used to operate the point of entry. 17. A process comprising:
receiving body area network (BAN) data; and operating a point of entry as a function of the BAN data. 18. The process of claim 17, comprising:
transmitting a signal, the signal configured for reception by a BAN, wherein the BAN is associated with a person; receiving from the BAN an identification associated with the person; and operating the point of entry as a function of the identification. 19. The process of claim 18, comprising:
causing a reader computer processor to transmit the signal for reception by the BAN; causing the reader computer processor to receive a response from the BAN, the response comprising the identification associated with the person; causing the reader computer processor to transmit to a server computer processor the identification associated with the person; causing the server computer processor to determine an access privilege to the point of entry for the person as a function of the identification of the person; causing the server computer processor to transmit to the reader computer processor the access privilege to the point of entry for the person; and causing the reader computer processor to receive the access privilege to the point of entry for the person and operate the point of entry as a function of the access privilege. 20. The process of claim 19, comprising transmitting a signal to a controller computer processor to operate the point of entry as a function of the access privilege. | 2,600 |
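The reader/server division of labor in claim 3 (and its method counterpart, claim 19) can be sketched as follows. The class and method names are illustrative assumptions, and the controller that physically operates the door or gate is reduced to a boolean flag.

```python
class Server:
    """Stand-in for the server computer processor: maps a person's
    identification, received from the reader, to an access privilege."""

    def __init__(self, privileges):
        self.privileges = privileges  # person_id -> bool

    def access_privilege(self, person_id):
        return self.privileges.get(person_id, False)


class Reader:
    """Stand-in for the reader computer processor: after transmitting a
    signal for reception by the BAN, it receives the identification in
    the BAN's response, forwards it to the server, and signals the
    controller to operate the point of entry."""

    def __init__(self, server):
        self.server = server
        self.entry_open = False

    def handle_ban_response(self, person_id):
        granted = self.server.access_privilege(person_id)
        self.entry_open = granted  # controller opens or keeps the barrier closed
        return granted
```

The split mirrors the claim: the reader never holds the privilege table itself, it only relays the identification and acts on the server's answer.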
9,792 | 9,792 | 14,835,693 | 2,643 | Embodiments manage data transfers using a plurality of data usage plans available to a computing device. Each of the data usage plans has data usage statistics representing an amount of network data consumed under the data usage plan. For each data transfer request received from applications executing on the computing device, a service executing on the computing device or in a cloud defines a network data transfer configuration for performing the data transfer request. The network data transfer configuration is defined based on, for example, the data usage plans, the data usage statistics, and the data transfer request to reduce transfer costs and/or provide a particular quality of service (QoS). | 1. A system for managing data transfers with a plurality of data usage plans, said system comprising:
a memory area associated with a mobile computing device, said memory area storing a plurality of data usage plans available to the mobile computing device, each of the plurality of data usage plans having data usage statistics associated therewith representing an amount of network data consumed under the data usage plan; and a processor programmed to:
receive a data transfer request from at least one of a plurality of applications executing on the mobile computing device; and
select, based at least on the plurality of data usage plans, the associated data usage statistics, and the received data transfer request, a mobile operator and at least one network connection available to the mobile computing device to initiate the received data transfer request. 2. The system of claim 1, wherein the processor is programmed to select the mobile operator by selecting mobile operator credentials to present to the selected network for the received data transfer request. 3. The system of claim 1, wherein the mobile computing device has dual subscriber identity modules (SIMs) with one of the SIMs in standby mode, and wherein the processor is programmed to select at least one of the SIMs to initiate the received data transfer request. 4. The system of claim 1, wherein the mobile computing device has dual subscriber identity modules (SIMs) with both of the SIMs being independently active, and wherein the processor is programmed to select at least one of the SIMs to initiate the received data transfer request. 5. The system of claim 1, wherein the received data transfer request represents an incoming voice call on a first data usage plan of a first subscriber identity module (SIM), and wherein the processor is programmed to answer the incoming voice call on a second data usage plan of a second SIM. 6. The system of claim 1, wherein the mobile computing device has a plurality of radios with at least two of the plurality of radios consuming a battery at differing rates, and wherein the processor is programmed to select the mobile operator and the network connection to balance network data usage against battery usage. 7. The system of claim 1, further comprising means for selecting among the plurality of data usage plans available to the mobile computing device for performing network data transfers. 8. A method comprising:
accessing a plurality of data usage plans available to a computing device, each of the plurality of data usage plans having data usage statistics associated therewith representing an amount of network data consumed under the data usage plan; receiving a data transfer request from at least one of a plurality of applications executing on the computing device; and selecting, based at least on the accessed plurality of data usage plans, the associated data usage statistics, and the received data transfer request, at least one of the accessed plurality of data usage plans and at least one network connection available to the computing device to initiate the received data transfer request. 9. The method of claim 8, further comprising presenting, to a user of the computing device based on the received data transfer request, at least one offer from a mobile operator to adjust the selected data usage plan for performance of the received data transfer request. 10. The method of claim 9, further comprising performing a cost-benefit analysis of the presented offer, and determining whether to recommend to the user that the user accept the presented offer based on the performed cost-benefit analysis. 11. The method of claim 9, wherein presenting the offer includes presenting an offer to instantly increase a data transfer rate for the received data transfer request. 12. The method of claim 8, wherein selecting the data usage plan and the network connection represents a network transfer decision for the computing device, and further comprising crowdsourcing the network transfer decisions of a plurality of computing devices to identify patterns in the network transfer decisions. 13. 
The method of claim 8, wherein the plurality of data usage plans includes a first data usage plan with additional network data consumption available and a second data usage plan without additional network data consumption available, and wherein selecting the data usage plan comprises selecting the first data usage plan to prevent overage charges. 14. The method of claim 13, wherein the first data usage plan is associated with a family data usage plan. 15. The method of claim 8, wherein the selected data usage plan represents a first data usage plan, and further comprising performing an in-call switch over by selecting a second data usage plan after initiation of the received data transfer request on the first data usage plan on determining that an estimated transfer cost of the second data usage plan is less than an estimated transfer cost of the first data usage plan. 16. One or more computer storage media embodying computer-executable components, said components comprising:
a memory component that when executed causes at least one processor to store a plurality of data usage plans available to a computing device, each of the plurality of data usage plans having data usage statistics associated therewith representing an amount of network data consumed under the data usage plan; a communications interface component that when executed causes at least one processor to receive a data transfer request from at least one of a plurality of applications executing on the computing device; a prediction component that when executed causes at least one processor to calculate an estimated transfer cost for the data transfer request received by the communications interface component using each of the data usage plans stored by the memory component; and a filter component that when executed causes at least one processor to select, based at least on the estimated transfer cost calculated by the prediction component, one of the data usage plans stored by the memory component and one of a plurality of network connections available to the computing device to initiate the data transfer request received by the communications interface component. 17. The computer storage media of claim 16, wherein the prediction component further estimates a quality of service (QoS) associated with each of the plurality of data usage plans, and wherein the filter component further selects the one of the data usage plans and the one of a plurality of network connections based on the QoS estimated by the prediction component. 18. The computer storage media of claim 17, wherein the filter component performs a cost-benefit analysis for each of the data usage plans based on the estimated transfer cost and/or the estimated QoS. 19. The computer storage media of claim 18, wherein the filter component selects the data usage plan and the network connection with the lowest estimated transfer cost and/or highest estimated QoS. 20. 
The computer storage media of claim 16, wherein the filter component further selects one of a plurality of mobile operators based at least on the estimated transfer cost calculated by the prediction component. | Embodiments manage data transfers using a plurality of data usage plans available to a computing device. Each of the data usage plans has data usage statistics representing an amount of network data consumed under the data usage plan. For each data transfer request received from applications executing on the computing device, a service executing on the computing device or in a cloud defines a network data transfer configuration for performing the data transfer request. The network data transfer configuration is defined based on, for example, the data usage plans, the data usage statistics, and the data transfer request to reduce transfer costs and/or provide a particular quality of service (QoS). 1. A system for managing data transfers with a plurality of data usage plans, said system comprising:
a memory area associated with a mobile computing device, said memory area storing a plurality of data usage plans available to the mobile computing device, each of the plurality of data usage plans having data usage statistics associated therewith representing an amount of network data consumed under the data usage plan; and a processor programmed to:
receive a data transfer request from at least one of a plurality of applications executing on the mobile computing device; and
select, based at least on the plurality of data usage plans, the associated data usage statistics, and the received data transfer request, a mobile operator and at least one network connection available to the mobile computing device to initiate the received data transfer request. 2. The system of claim 1, wherein the processor is programmed to select the mobile operator by selecting mobile operator credentials to present to the selected network for the received data transfer request. 3. The system of claim 1, wherein the mobile computing device has dual subscriber identity modules (SIMs) with one of the SIMs in standby mode, and wherein the processor is programmed to select at least one of the SIMs to initiate the received data transfer request. 4. The system of claim 1, wherein the mobile computing device has dual subscriber identity modules (SIMs) with both of the SIMs being independently active, and wherein the processor is programmed to select at least one of the SIMs to initiate the received data transfer request. 5. The system of claim 1, wherein the received data transfer request represents an incoming voice call on a first data usage plan of a first subscriber identity module (SIM), and wherein the processor is programmed to answer the incoming voice call on a second data usage plan of a second SIM. 6. The system of claim 1, wherein the mobile computing device has a plurality of radios with at least two of the plurality of radios consuming a battery at differing rates, and wherein the processor is programmed to select the mobile operator and the network connection to balance network data usage against battery usage. 7. The system of claim 1, further comprising means for selecting among the plurality of data usage plans available to the mobile computing device for performing network data transfers. 8. A method comprising:
accessing a plurality of data usage plans available to a computing device, each of the plurality of data usage plans having data usage statistics associated therewith representing an amount of network data consumed under the data usage plan; receiving a data transfer request from at least one of a plurality of applications executing on the computing device; and selecting, based at least on the accessed plurality of data usage plans, the associated data usage statistics, and the received data transfer request, at least one of the accessed plurality of data usage plans and at least one network connection available to the computing device to initiate the received data transfer request. 9. The method of claim 8, further comprising presenting, to a user of the computing device based on the received data transfer request, at least one offer from a mobile operator to adjust the selected data usage plan for performance of the received data transfer request. 10. The method of claim 9, further comprising performing a cost-benefit analysis of the presented offer, and determining whether to recommend to the user that the user accept the presented offer based on the performed cost-benefit analysis. 11. The method of claim 9, wherein presenting the offer includes presenting an offer to instantly increase a data transfer rate for the received data transfer request. 12. The method of claim 8, wherein selecting the data usage plan and the network connection represents a network transfer decision for the computing device, and further comprising crowdsourcing the network transfer decisions of a plurality of computing devices to identify patterns in the network transfer decisions. 13. 
The method of claim 8, wherein the plurality of data usage plans includes a first data usage plan with additional network data consumption available and a second data usage plan without additional network data consumption available, and wherein selecting the data usage plan comprises selecting the first data usage plan to prevent overage charges. 14. The method of claim 13, wherein the first data usage plan is associated with a family data usage plan. 15. The method of claim 8, wherein the selected data usage plan represents a first data usage plan, and further comprising performing an in-call switch over by selecting a second data usage plan after initiation of the received data transfer request on the first data usage plan on determining that an estimated transfer cost of the second data usage plan is less than an estimated transfer cost of the first data usage plan. 16. One or more computer storage media embodying computer-executable components, said components comprising:
a memory component that when executed causes at least one processor to store a plurality of data usage plans available to a computing device, each of the plurality of data usage plans having data usage statistics associated therewith representing an amount of network data consumed under the data usage plan; a communications interface component that when executed causes at least one processor to receive a data transfer request from at least one of a plurality of applications executing on the computing device; a prediction component that when executed causes at least one processor to calculate an estimated transfer cost for the data transfer request received by the communications interface component using each of the data usage plans stored by the memory component; and a filter component that when executed causes at least one processor to select, based at least on the estimated transfer cost calculated by the prediction component, one of the data usage plans stored by the memory component and one of a plurality of network connections available to the computing device to initiate the data transfer request received by the communications interface component. 17. The computer storage media of claim 16, wherein the prediction component further estimates a quality of service (QoS) associated with each of the plurality of data usage plans, and wherein the filter component further selects the one of the data usage plans and the one of a plurality of network connections based on the QoS estimated by the prediction component. 18. The computer storage media of claim 17, wherein the filter component performs a cost-benefit analysis for each of the data usage plans based on the estimated transfer cost and/or the estimated QoS. 19. The computer storage media of claim 18, wherein the filter component selects the data usage plan and the network connection with the lowest estimated transfer cost and/or highest estimated QoS. 20. 
The computer storage media of claim 16, wherein the filter component further selects one of a plurality of mobile operators based at least on the estimated transfer cost calculated by the prediction component. | 2,600 |
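The prediction/filter pipeline recited in claims 16–20 above — estimate a transfer cost for the request under each stored data usage plan, then select the plan and network connection with the lowest estimated cost — can be sketched in Python. Every class name, field, and the pricing model below is an assumption for illustration only, not the patent's implementation.

```python
from dataclasses import dataclass


@dataclass
class DataUsagePlan:
    """Assumed shape of a stored plan: a quota plus per-MB pricing."""
    name: str
    cost_per_mb: float          # cost of data within the remaining quota
    data_remaining_mb: float    # quota left before overage applies
    overage_cost_per_mb: float  # cost of data beyond the quota


def estimated_transfer_cost(plan: DataUsagePlan, transfer_mb: float) -> float:
    """Prediction component: estimate the cost of a transfer under one plan."""
    within_quota = min(transfer_mb, plan.data_remaining_mb)
    overage = transfer_mb - within_quota
    return within_quota * plan.cost_per_mb + overage * plan.overage_cost_per_mb


def select_plan(plans, connections, transfer_mb):
    """Filter component: pick the (plan, connection) pair with the lowest
    estimated transfer cost for the received request."""
    return min(
        ((p, c) for p in plans for c in connections),
        key=lambda pair: estimated_transfer_cost(pair[0], transfer_mb),
    )
```

For example, with a 50 MB request, a plan holding 100 MB of free quota would be chosen over one with only 10 MB left and a per-MB overage charge, matching claim 13's "select the first data usage plan to prevent overage charges."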
9,793 | 9,793 | 14,971,979 | 2,645 | A method and an electronic device employing an uninterrupted media play and call management system (UMPCMS) are provided for managing an incoming call without interrupting playing of media on the electronic device. The UMPCMS receives an indication of the incoming call, generates a notification object with one or more call management options for the incoming call in a configurable format based on preconfigured criteria, and overlays the generated notification object with the call management options on a graphical user interface (GUI) provided by the UMPCMS, while supporting continued playing of the media on the electronic device via the GUI without interruption by the incoming call. The UMPCMS receives a selection of a call management option from the electronic device and performs one or more executable actions on the incoming call and/or the playing of the media on the electronic device based on the received selection of the call management option. | 1. A method for managing an incoming call during playing of media on a user device, without interrupting the playing of the media on the user device, the method employing an uninterrupted media play and call management system executable by at least one processor configured to execute computer program instructions for performing the method, the method comprising:
receiving an indication of the incoming call by the uninterrupted media play and call management system, during the playing of the media on the user device via a graphical user interface provided by the uninterrupted media play and call management system on the user device; generating a notification object with one or more of a plurality of call management options for the incoming call in one of a plurality of configurable formats based on preconfigured criteria by the uninterrupted media play and call management system; overlaying the generated notification object with the one or more of the call management options on the graphical user interface by the uninterrupted media play and call management system, while supporting continued playing of the media on the user device via the graphical user interface without interruption by the incoming call; receiving a selection of one of the one or more of the call management options through the overlaid notification object from the user device via the graphical user interface and processing the received selection of the one of the one or more of the call management options, by the uninterrupted media play and call management system; and performing one or more executable actions on one or more of the incoming call and the playing of the media on the user device by the uninterrupted media play and call management system based on the processed selection of the one of the one or more of the call management options. 2. The method of claim 1, wherein the notification object comprises one or more of a plurality of identifiers of a caller of the incoming call, wherein the identifiers comprise a name, a contact number, and an image of the caller. 3. The method of claim 1, wherein the notification object in the one of the configurable formats is one of a calendar object, a stamp object, and a blinder object. 4. 
The method of claim 1, wherein the notification object in the one of the configurable formats comprises one or more interface elements for receiving the selection of the one of the one or more of the call management options from the user device via the graphical user interface. 5. The method of claim 1, wherein the generation of the notification object comprises configuring a calendar object to log the incoming call, the playing of the media on the user device, and the one or more executable actions performed on the one or more of the incoming call and the playing of the media on the user device. 6. The method of claim 5, wherein the calendar object is further configured to log data comprising messages communicated between users, recordings of the media in one of the user device, a cloud computing environment, and a combination thereof, ratings of quality of the media, images, and social media, and create and schedule recording of the media and user events. 7. The method of claim 1, wherein the preconfigured criteria for the generation of the notification object with the one or more of the call management options comprise one of blocking the incoming call, allowing only text communication, accepting the incoming call while supporting the continued playing of the media on the user device via the graphical user interface without the interruption by the incoming call, and vibrating the user device. 8. The method of claim 1, wherein the call management options comprise:
accepting the incoming call; rejecting the incoming call; sending a text communication to a caller of the incoming call; sending a social media communication to a caller of the incoming call; sending an automated message indicating an unavailability of the user device for any communication for a duration of the playing of the media; sending an automated message indicating an availability of the user device only for the text communication for the duration of the playing of the media; and forwarding the incoming call to a predefined destination. 9. The method of claim 1, wherein the performance of the one or more executable actions on the one or more of the incoming call and the playing of the media on the user device comprises:
configuring the graphical user interface into a configurable number of interface sections by the uninterrupted media play and call management system to allow a recipient of the incoming call to execute the one of the call management options during the playing of the media on the user device; and executing the one of the call management options on one of the interface sections of the graphical user interface by the uninterrupted media play and call management system, and continuing the playing of the media on another one or more of the interface sections of the graphical user interface by the uninterrupted media play and call management system. 10. The method of claim 1, wherein the performance of the one or more executable actions on the one or more of the incoming call and the playing of the media on the user device when the incoming call is accepted comprises:
pausing the playing of the media by the uninterrupted media play and call management system; and recording the media being played for later use in one of the user device, a cloud computing environment, and a combination thereof by the uninterrupted media play and call management system for a duration of the incoming call. 11. The method of claim 1, wherein the media being played comprises an audio component and a video component, and wherein the performance of the one or more executable actions on the one or more of the incoming call and the playing of the media on the user device when the incoming call is accepted comprises reversibly replacing the audio component of the media being played with audio of the incoming call by the uninterrupted media play and call management system, while rendering the video component of the media being played on the graphical user interface for a duration of the incoming call. 12. The method of claim 1, further comprising reversibly configuring the graphical user interface into a plurality of interface sections by the uninterrupted media play and call management system for the playing of up to a predetermined number of the media simultaneously. 13. The method of claim 1, wherein the generated notification object with the one or more of the call management options is overlaid on the graphical user interface by the uninterrupted media play and call management system as one of a translucent display and a hidden display, while supporting the continued playing of the media on the user device via the graphical user interface without the interruption by the incoming call. 14. An electronic device employing an uninterrupted media play and call management system for managing an incoming call during playing of media on the electronic device, without interrupting the playing of the media on the electronic device, the electronic device comprising:
a non-transitory computer readable storage medium configured to store computer program instructions defined by the uninterrupted media play and call management system; at least one processor communicatively coupled to the non-transitory computer readable storage medium, the at least one processor configured to execute the defined computer program instructions; a display screen configured to display a graphical user interface provided by the uninterrupted media play and call management system; and the uninterrupted media play and call management system comprising: a data reception module configured to receive an indication of the incoming call during the playing of the media on the electronic device via the graphical user interface; a notification generation module configured to generate a notification object with one or more of a plurality of call management options for the incoming call in one of a plurality of configurable formats based on preconfigured criteria; a notification overlay module configured to overlay the generated notification object with the one or more of the call management options on the graphical user interface, while supporting continued playing of the media on the electronic device via the graphical user interface without interruption by the incoming call; the data reception module further configured to receive a selection of one of the one or more of the call management options through the overlaid notification object from the electronic device via the graphical user interface and process the received selection of the one of the one or more of the call management options; and an action module configured to perform one or more executable actions on one or more of the incoming call and the playing of the media on the electronic device based on the processed selection of the one of the one or more of the call management options. 15. 
The electronic device of claim 14, wherein the notification object in the one of the configurable formats is one of a calendar object, a stamp object, and a blinder object. 16. The electronic device of claim 14, wherein the notification generation module is further configured to configure a calendar object to log the incoming call, the playing of the media on the electronic device, and the one or more executable actions performed on the one or more of the incoming call and the playing of the media on the electronic device. 17. The electronic device of claim 16, wherein the notification generation module is further configured to configure the calendar object to log data comprising messages communicated between users, recordings of the media in one of the electronic device, a cloud computing environment, and a combination thereof, ratings of quality of the media, images, and social media, and create and schedule recording of the media and user events. 18. The electronic device of claim 14, wherein the preconfigured criteria for the generation of the notification object with the one or more of the call management options comprise one of blocking the incoming call, allowing only text communication, accepting the incoming call while supporting the continued playing of the media on the electronic device via the graphical user interface without the interruption by the incoming call, and vibrating the electronic device. 19. The electronic device of claim 14, wherein the call management options comprise:
accepting the incoming call; rejecting the incoming call; sending a text communication to a caller of the incoming call; sending a social media communication to a caller of the incoming call; sending an automated message indicating an unavailability of the electronic device for any communication for a duration of the playing of the media; sending an automated message indicating an availability of the electronic device only for the text communication for the duration of the playing of the media; and forwarding the incoming call to a predefined destination. 20. A non-transitory computer readable storage medium having embodied thereon, computer program codes comprising instructions executable by at least one processor for managing an incoming call during playing of media on a user device, without interrupting the playing of the media on the user device, the computer program codes comprising:
a first computer program code for receiving an indication of the incoming call during the playing of the media on the user device via a graphical user interface provided on the user device; a second computer program code for generating a notification object with one or more of a plurality of call management options for the incoming call in one of a plurality of configurable formats based on preconfigured criteria; a third computer program code for overlaying the generated notification object with the one or more of the call management options on the graphical user interface, while supporting continued playing of the media on the user device via the graphical user interface without interruption by the incoming call; a fourth computer program code for receiving a selection of one of the one or more of the call management options through the overlaid notification object from the user device via the graphical user interface and processing the received selection of the one of the one or more of the call management options; and a fifth computer program code for performing one or more executable actions on one or more of the incoming call and the playing of the media on the user device based on the processed selection of the one of the one or more of the call management options. 21. The electronic device of claim 14, wherein the notification object comprises one or more of a plurality of identifiers of a caller of the incoming call, wherein the identifiers comprise a name, a contact number, and an image of the caller. 22. The electronic device of claim 14, wherein the notification object in the one of the configurable formats comprises one or more interface elements in operable communication with the data reception module for receiving the selection of the one of the one or more of the call management options from the electronic device via the graphical user interface. 23. The electronic device of claim 14, wherein the action module is further configured to perform:
configuring the graphical user interface into a configurable number of interface sections to allow a recipient of the incoming call to execute the one of the call management options during the playing of the media on the electronic device; and executing the one of the call management options on one of the interface sections of the graphical user interface, and continuing the playing of the media on another one or more of the interface sections of the graphical user interface. 24. The electronic device of claim 14, wherein the action module is further configured to record the media being played for later use in one of the electronic device, a cloud computing environment, and a combination thereof for a duration of the incoming call. 25. The electronic device of claim 14, wherein the media being played comprises an audio component and a video component, and wherein the action module is further configured to reversibly replace the audio component of the media being played with audio of the incoming call, while rendering the video component of the media being played on the graphical user interface for a duration of the incoming call, when the incoming call is accepted. 26. The electronic device of claim 14, wherein the action module is further configured to reversibly configure the graphical user interface into a plurality of interface sections for the playing of up to a predetermined number of the media simultaneously. 27. The electronic device of claim 14, wherein the notification overlay module is configured to overlay the generated notification object with the one or more of the call management options on the graphical user interface as one of a translucent display and a hidden display, while supporting the continued playing of the media on the electronic device via the graphical user interface without the interruption by the incoming call. 28. 
The electronic device of claim 14 configured as one of a tablet computing device, a wearable computing device, a mobile phone, and a personal computer. 29. The non-transitory computer readable storage medium of claim 20, wherein the preconfigured criteria for the generation of the notification object with the one or more of the call management options comprise one of blocking the incoming call, allowing only text communication, accepting the incoming call while supporting the continued playing of the media on the user device via the graphical user interface without the interruption by the incoming call, and vibrating the user device. 30. The non-transitory computer readable storage medium of claim 20, wherein the call management options comprise:
accepting the incoming call; rejecting the incoming call; sending a text communication to a caller of the incoming call; sending a social media communication to a caller of the incoming call; sending an automated message indicating an unavailability of the user device for any communication for a duration of the playing of the media; sending an automated message indicating an availability of the user device only for the text communication for the duration of the playing of the media; and forwarding the incoming call to a predefined destination. 31. The non-transitory computer readable storage medium of claim 20, wherein the second computer program code comprises a sixth computer program code for configuring a calendar object to log the incoming call, the playing of the media on the user device, the one or more executable actions performed on the one or more of the incoming call and the playing of the media on the user device, and data comprising messages communicated between users, recordings of the media in one of the user device, a cloud computing environment, and a combination thereof, ratings of quality of the media, images, and social media, and to create and schedule recording of the media and user events. 32. The non-transitory computer readable storage medium of claim 20, wherein the fifth computer program code comprises:
a seventh computer program code for configuring the graphical user interface into a configurable number of interface sections to allow a recipient of the incoming call to execute the one of the call management options during the playing of the media on the user device; and an eighth computer program code for executing the one of the call management options on one of the interface sections of the graphical user interface, and continuing the playing of the media on another one or more of the interface sections of the graphical user interface. 33. The non-transitory computer readable storage medium of claim 20, wherein the fifth computer program code comprises a ninth computer program code for recording the media being played for later use in one of the user device, a cloud computing environment, and a combination thereof for a duration of the incoming call. 34. The non-transitory computer readable storage medium of claim 20, wherein the fifth computer program code comprises a tenth computer program code for reversibly replacing an audio component of the media being played with audio of the incoming call, while rendering the video component of the media being played on the graphical user interface for a duration of the incoming call, when the incoming call is accepted. 35. The non-transitory computer readable storage medium of claim 20, wherein the fifth computer program code comprises an eleventh computer program code for reversibly configuring the graphical user interface into a plurality of interface sections for the playing of up to a predetermined number of the media simultaneously. | A method and an electronic device employing an uninterrupted media play and call management system (UMPCMS) are provided for managing an incoming call without interrupting playing of media on the electronic device. 
The UMPCMS receives an indication of the incoming call, generates a notification object with one or more call management options for the incoming call in a configurable format based on preconfigured criteria, and overlays the generated notification object with the call management options on a graphical user interface (GUI) provided by the UMPCMS, while supporting continued playing of the media on the electronic device via the GUI without interruption by the incoming call. The UMPCMS receives a selection of a call management option from the electronic device and performs one or more executable actions on the incoming call and/or the playing of the media on the electronic device based on the received selection of the call management option. 1. A method for managing an incoming call during playing of media on a user device, without interrupting the playing of the media on the user device, the method employing an uninterrupted media play and call management system executable by at least one processor configured to execute computer program instructions for performing the method, the method comprising:
receiving an indication of the incoming call by the uninterrupted media play and call management system, during the playing of the media on the user device via a graphical user interface provided by the uninterrupted media play and call management system on the user device; generating a notification object with one or more of a plurality of call management options for the incoming call in one of a plurality of configurable formats based on preconfigured criteria by the uninterrupted media play and call management system; overlaying the generated notification object with the one or more of the call management options on the graphical user interface by the uninterrupted media play and call management system, while supporting continued playing of the media on the user device via the graphical user interface without interruption by the incoming call; receiving a selection of one of the one or more of the call management options through the overlaid notification object from the user device via the graphical user interface and processing the received selection of the one of the one or more of the call management options, by the uninterrupted media play and call management system; and performing one or more executable actions on one or more of the incoming call and the playing of the media on the user device by the uninterrupted media play and call management system based on the processed selection of the one of the one or more of the call management options. 2. The method of claim 1, wherein the notification object comprises one or more of a plurality of identifiers of a caller of the incoming call, wherein the identifiers comprise a name, a contact number, and an image of the caller. 3. The method of claim 1, wherein the notification object in the one of the configurable formats is one of a calendar object, a stamp object, and a blinder object. 4. 
The method of claim 1, wherein the notification object in the one of the configurable formats comprises one or more interface elements for receiving the selection of the one of the one or more of the call management options from the user device via the graphical user interface. 5. The method of claim 1, wherein the generation of the notification object comprises configuring a calendar object to log the incoming call, the playing of the media on the user device, and the one or more executable actions performed on the one or more of the incoming call and the playing of the media on the user device. 6. The method of claim 5, wherein the calendar object is further configured to log data comprising messages communicated between users, recordings of the media in one of the user device, a cloud computing environment, and a combination thereof, ratings of quality of the media, images, and social media, and create and schedule recording of the media and user events. 7. The method of claim 1, wherein the preconfigured criteria for the generation of the notification object with the one or more of the call management options comprise one of blocking the incoming call, allowing only text communication, accepting the incoming call while supporting the continued playing of the media on the user device via the graphical user interface without the interruption by the incoming call, and vibrating the user device. 8. The method of claim 1, wherein the call management options comprise:
accepting the incoming call; rejecting the incoming call; sending a text communication to a caller of the incoming call; sending a social media communication to a caller of the incoming call; sending an automated message indicating an unavailability of the user device for any communication for a duration of the playing of the media; sending an automated message indicating an availability of the user device only for the text communication for the duration of the playing of the media; and forwarding the incoming call to a predefined destination. 9. The method of claim 1, wherein the performance of the one or more executable actions on the one or more of the incoming call and the playing of the media on the user device comprises:
configuring the graphical user interface into a configurable number of interface sections by the uninterrupted media play and call management system to allow a recipient of the incoming call to execute the one of the call management options during the playing of the media on the user device; and executing the one of the call management options on one of the interface sections of the graphical user interface by the uninterrupted media play and call management system, and continuing the playing of the media on another one or more of the interface sections of the graphical user interface by the uninterrupted media play and call management system. 10. The method of claim 1, wherein the performance of the one or more executable actions on the one or more of the incoming call and the playing of the media on the user device when the incoming call is accepted comprises:
pausing the playing of the media by the uninterrupted media play and call management system; and recording the media being played for later use in one of the user device, a cloud computing environment, and a combination thereof by the uninterrupted media play and call management system for a duration of the incoming call. 11. The method of claim 1, wherein the media being played comprises an audio component and a video component, and wherein the performance of the one or more executable actions on the one or more of the incoming call and the playing of the media on the user device when the incoming call is accepted comprises reversibly replacing the audio component of the media being played with audio of the incoming call by the uninterrupted media play and call management system, while rendering the video component of the media being played on the graphical user interface for a duration of the incoming call. 12. The method of claim 1, further comprising reversibly configuring the graphical user interface into a plurality of interface sections by the uninterrupted media play and call management system for the playing of up to a predetermined number of the media simultaneously. 13. The method of claim 1, wherein the generated notification object with the one or more of the call management options is overlaid on the graphical user interface by the uninterrupted media play and call management system as one of a translucent display and a hidden display, while supporting the continued playing of the media on the user device via the graphical user interface without the interruption by the incoming call. 14. An electronic device employing an uninterrupted media play and call management system for managing an incoming call during playing of media on the electronic device, without interrupting the playing of the media on the electronic device, the electronic device comprising:
a non-transitory computer readable storage medium configured to store computer program instructions defined by the uninterrupted media play and call management system; at least one processor communicatively coupled to the non-transitory computer readable storage medium, the at least one processor configured to execute the defined computer program instructions; a display screen configured to display a graphical user interface provided by the uninterrupted media play and call management system; and the uninterrupted media play and call management system comprising: a data reception module configured to receive an indication of the incoming call during the playing of the media on the electronic device via the graphical user interface; a notification generation module configured to generate a notification object with one or more of a plurality of call management options for the incoming call in one of a plurality of configurable formats based on preconfigured criteria; a notification overlay module configured to overlay the generated notification object with the one or more of the call management options on the graphical user interface, while supporting continued playing of the media on the electronic device via the graphical user interface without interruption by the incoming call; the data reception module further configured to receive a selection of one of the one or more of the call management options through the overlaid notification object from the electronic device via the graphical user interface and process the received selection of the one of the one or more of the call management options; and an action module configured to perform one or more executable actions on one or more of the incoming call and the playing of the media on the electronic device based on the processed selection of the one of the one or more of the call management options. 15. 
The electronic device of claim 14, wherein the notification object in the one of the configurable formats is one of a calendar object, a stamp object, and a blinder object. 16. The electronic device of claim 14, wherein the notification generation module is further configured to configure a calendar object to log the incoming call, the playing of the media on the electronic device, and the one or more executable actions performed on the one or more of the incoming call and the playing of the media on the electronic device. 17. The electronic device of claim 16, wherein the notification generation module is further configured to configure the calendar object to log data comprising messages communicated between users, recordings of the media in one of the electronic device, a cloud computing environment, and a combination thereof, ratings of quality of the media, images, and social media, and create and schedule recording of the media and user events. 18. The electronic device of claim 14, wherein the preconfigured criteria for the generation of the notification object with the one or more of the call management options comprise one of blocking the incoming call, allowing only text communication, accepting the incoming call while supporting the continued playing of the media on the electronic device via the graphical user interface without the interruption by the incoming call, and vibrating the electronic device. 19. The electronic device of claim 14, wherein the call management options comprise:
accepting the incoming call; rejecting the incoming call; sending a text communication to a caller of the incoming call; sending a social media communication to a caller of the incoming call; sending an automated message indicating an unavailability of the electronic device for any communication for a duration of the playing of the media; sending an automated message indicating an availability of the electronic device only for the text communication for the duration of the playing of the media; and forwarding the incoming call to a predefined destination. 20. A non-transitory computer readable storage medium having embodied thereon, computer program codes comprising instructions executable by at least one processor for managing an incoming call during playing of media on a user device, without interrupting the playing of the media on the user device, the computer program codes comprising:
a first computer program code for receiving an indication of the incoming call during the playing of the media on the user device via a graphical user interface provided on the user device; a second computer program code for generating a notification object with one or more of a plurality of call management options for the incoming call in one of a plurality of configurable formats based on preconfigured criteria; a third computer program code for overlaying the generated notification object with the one or more of the call management options on the graphical user interface, while supporting continued playing of the media on the user device via the graphical user interface without interruption by the incoming call; a fourth computer program code for receiving a selection of one of the one or more of the call management options through the overlaid notification object from the user device via the graphical user interface and processing the received selection of the one of the one or more of the call management options; and a fifth computer program code for performing one or more executable actions on one or more of the incoming call and the playing of the media on the user device based on the processed selection of the one of the one or more of the call management options. 21. The electronic device of claim 14, wherein the notification object comprises one or more of a plurality of identifiers of a caller of the incoming call, wherein the identifiers comprise a name, a contact number, and an image of the caller. 22. The electronic device of claim 14, wherein the notification object in the one of the configurable formats comprises one or more interface elements in operable communication with the data reception module for receiving the selection of the one of the one or more of the call management options from the electronic device via the graphical user interface. 23. The electronic device of claim 14, wherein the action module is further configured to perform:
configuring the graphical user interface into a configurable number of interface sections to allow a recipient of the incoming call to execute the one of the call management options during the playing of the media on the electronic device; and executing the one of the call management options on one of the interface sections of the graphical user interface, and continuing the playing of the media on another one or more of the interface sections of the graphical user interface. 24. The electronic device of claim 14, wherein the action module is further configured to record the media being played for later use in one of the electronic device, a cloud computing environment, and a combination thereof for a duration of the incoming call. 25. The electronic device of claim 14, wherein the media being played comprises an audio component and a video component, and wherein the action module is further configured to reversibly replace the audio component of the media being played with audio of the incoming call, while rendering the video component of the media being played on the graphical user interface for a duration of the incoming call, when the incoming call is accepted. 26. The electronic device of claim 14, wherein the action module is further configured to reversibly configure the graphical user interface into a plurality of interface sections for the playing of up to a predetermined number of the media simultaneously. 27. The electronic device of claim 14, wherein the notification overlay module is configured to overlay the generated notification object with the one or more of the call management options on the graphical user interface as one of a translucent display and a hidden display, while supporting the continued playing of the media on the electronic device via the graphical user interface without the interruption by the incoming call. 28. 
The electronic device of claim 14 configured as one of a tablet computing device, a wearable computing device, a mobile phone, and a personal computer. 29. The non-transitory computer readable storage medium of claim 20, wherein the preconfigured criteria for the generation of the notification object with the one or more of the call management options comprise one of blocking the incoming call, allowing only text communication, accepting the incoming call while supporting the continued playing of the media on the user device via the graphical user interface without the interruption by the incoming call, and vibrating the user device. 30. The non-transitory computer readable storage medium of claim 20, wherein the call management options comprise:
accepting the incoming call; rejecting the incoming call; sending a text communication to a caller of the incoming call; sending a social media communication to a caller of the incoming call; sending an automated message indicating an unavailability of the user device for any communication for a duration of the playing of the media; sending an automated message indicating an availability of the user device only for the text communication for the duration of the playing of the media; and forwarding the incoming call to a predefined destination. 31. The non-transitory computer readable storage medium of claim 20, wherein the second computer program code comprises a sixth computer program code for configuring a calendar object to log the incoming call, the playing of the media on the user device, the one or more executable actions performed on the one or more of the incoming call and the playing of the media on the user device, and data comprising messages communicated between users, recordings of the media in one of the user device, a cloud computing environment, and a combination thereof, ratings of quality of the media, images, and social media, and to create and schedule recording of the media and user events. 32. The non-transitory computer readable storage medium of claim 20, wherein the fifth computer program code comprises:
a seventh computer program code for configuring the graphical user interface into a configurable number of interface sections to allow a recipient of the incoming call to execute the one of the call management options during the playing of the media on the user device; and an eighth computer program code for executing the one of the call management options on one of the interface sections of the graphical user interface, and continuing the playing of the media on another one or more of the interface sections of the graphical user interface. 33. The non-transitory computer readable storage medium of claim 20, wherein the fifth computer program code comprises a ninth computer program code for recording the media being played for later use in one of the user device, a cloud computing environment, and a combination thereof for a duration of the incoming call. 34. The non-transitory computer readable storage medium of claim 20, wherein the fifth computer program code comprises a tenth computer program code for reversibly replacing an audio component of the media being played with audio of the incoming call, while rendering the video component of the media being played on the graphical user interface for a duration of the incoming call, when the incoming call is accepted. 35. The non-transitory computer readable storage medium of claim 20, wherein the fifth computer program code comprises an eleventh computer program code for reversibly configuring the graphical user interface into a plurality of interface sections for the playing of up to a predetermined number of the media simultaneously. | 2,600 |
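The record above claims dispatching a selected call-management option (accept, reject, text, automated message, forward) while media playback continues uninterrupted. A minimal Python sketch of that dispatch idea follows; the class, function, and option strings are all invented for illustration and are not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Player:
    """Stand-in for the media player: playback state plus an action log."""
    playing: bool = True
    log: list = field(default_factory=list)

def handle_option(player, option, caller):
    """Execute the selected call-management option without touching playback.

    Mirrors the claimed flow: the selection arrives via the overlaid
    notification object, and an executable action is performed while the
    media keeps playing.
    """
    actions = {
        "accept": lambda: player.log.append(f"accepted call from {caller}"),
        "reject": lambda: player.log.append(f"rejected call from {caller}"),
        "text": lambda: player.log.append(f"sent text to {caller}"),
        "auto_unavailable": lambda: player.log.append(
            f"told {caller}: unavailable while media plays"),
        "forward": lambda: player.log.append(f"forwarded {caller}'s call"),
    }
    actions[option]()          # perform the chosen executable action
    return player.playing      # playback state is never changed here

p = Player()
still_playing = handle_option(p, "reject", "Alice")
print(still_playing, p.log[-1])
```

The point of the sketch is only the separation the claims insist on: the action table operates on the call, never on the playback flag.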
9,794 | 9,794 | 14,980,056 | 2,657 | Systems and methods for text normalization in a plurality of noisy channels receive a text entry and channel origin data of the text entry; determine whether the text entry matches an in-vocabulary (IV) entry or whether the text entry is an out-of-vocabulary (OOV) entry; if the text entry is determined to have a matching IV entry, output the matching IV entry, and if the text entry is determined to be an OOV entry, implement a channel-specific error-type adapter framework based on the channel origin data, wherein the channel-specific error-type adapter framework is optimized for a specific channel from which the text entry originated; normalize the text entry using the channel-specific error-type adapter framework; and output one or more candidate normalized forms of the text entry. | 1. A method for text normalization in a plurality of noisy channels, performed on a computing device having a processor, memory, and one or more code sets stored in the memory and executing in the processor, the method comprising:
receiving, by the processor, a text entry and channel origin data of the text entry; determining, by the processor, whether the text entry matches an in-vocabulary (IV) entry or whether the text entry is an out-of-vocabulary (OOV) entry; if the text entry is determined to have a matching IV entry:
outputting, by the processor, the matching IV entry; and
if the text entry is determined to be an OOV entry:
implementing, by the processor, a channel-specific error-type adapter framework based on the channel origin data;
wherein the channel-specific error-type adapter framework is optimized for a specific channel from which the text entry originated;
normalizing, by the processor, the text entry using the channel-specific error-type adapter framework; and
outputting one or more candidate normalized forms of the text entry. 2. The method of claim 1:
wherein the channel-specific error-type adapter framework comprises one or more error-type adapters; and wherein each error-type adapter is configured to model a different type of error to be normalized. 3. The method of claim 1, comprising an initial step of building an interpolated language model and one or more static lexicons to be implemented when normalizing the text entry using the channel-specific error-type adapter framework. 4. The method of claim 3, wherein the interpolated language model comprises a merged combination of a basic language model and one or more channel-specific language models. 5. The method of claim 3, wherein the one or more static lexicons comprise at least one of a proper name lexicon, an abbreviation lexicon, and an acronym lexicon. 6. The method of claim 2, wherein the one or more error-type adapters comprise at least one of a spelling error-type adapter, an abbreviation error-type adapter, an acronym error-type adapter, a phonetic shorthand error-type adapter, and a word concatenation error-type adapter. 7. The method of claim 6, wherein the spelling error-type adapter is generated using one or more matrices, each matrix computing a probability of a specific edit operation being performed to identify a spelling error correction. 8. The method of claim 2, wherein the channel-specific error-type adapter framework further comprises a probabilistic model of one or more error-type priors. 9. The method of claim 2, further comprising applying linguistic heuristics to expand the channel-specific error-type adapter framework. 10. A system for text normalization in a plurality of noisy channels, comprising:
a processor; a memory; and one or more code sets stored in the memory and executing in the processor, which, when executed, configure the processor to:
receive a text entry and channel origin data of the text entry;
determine whether the text entry matches an in-vocabulary (IV) entry or whether the text entry is an out-of-vocabulary (OOV) entry;
if the text entry is determined to have a matching IV entry:
output the matching IV entry; and
if the text entry is determined to be an OOV entry:
implement a channel-specific error-type adapter framework based on the channel origin data;
wherein the channel-specific error-type adapter framework is optimized for a specific channel from which the text entry originated;
normalize the text entry using the channel-specific error-type adapter framework; and
output one or more candidate normalized forms of the text entry. 11. The system of claim 10:
wherein the channel-specific error-type adapter framework comprises one or more error-type adapters; and wherein each error-type adapter is configured to model a different type of error to be normalized. 12. The system of claim 10, wherein the one or more code sets further configure the processor to build an interpolated language model and one or more static lexicons to be implemented when normalizing the text entry using the channel-specific error-type adapter framework. 13. The system of claim 12, wherein the interpolated language model comprises a merged combination of a basic language model and one or more channel-specific language models. 14. The system of claim 12, wherein the one or more static lexicons comprise at least one of a proper name lexicon, an abbreviation lexicon, and an acronym lexicon. 15. The system of claim 11, wherein the one or more error-type adapters comprise at least one of a spelling error-type adapter, an abbreviation error-type adapter, an acronym error-type adapter, a phonetic shorthand error-type adapter, and a word concatenation error-type adapter. 16. The system of claim 15, wherein the one or more code sets configure the processor to generate the spelling error-type adapter using one or more matrices, each matrix computing a probability of a specific edit operation being performed to identify a spelling error correction. 17. The system of claim 11, wherein the channel-specific error-type adapter framework further comprises a probabilistic model of one or more error-type priors. 18. The system of claim 11, further comprising applying linguistic heuristics to expand the channel-specific error-type adapter framework. 19. A method for text normalization in a plurality of noisy channels, performed on a computing device having a processor, memory, and one or more code sets stored in the memory and executing in the processor, the method comprising:
receiving, by the processor, a text entry and channel origin data of the text entry; determining, by the processor, whether the text entry matches an in-vocabulary (IV) entry or whether the text entry is an out-of-vocabulary (OOV) entry;
wherein the matching IV entry is outputted when the text entry is determined to have a matching IV entry,
wherein a channel-specific error-type adapter framework is implemented based on the channel origin data when the text entry is determined to be an OOV entry, and
wherein the channel-specific error-type adapter framework is optimized for a specific channel from which the text entry originated;
normalizing, by the processor, the text entry using the channel-specific error-type adapter framework; and outputting one or more candidate normalized forms of the text entry. 20. The method of claim 19:
wherein the channel-specific error-type adapter framework comprises one or more error-type adapters; and wherein each error-type adapter is configured to model a different type of error to be normalized. | Systems and methods for text normalization in a plurality of noisy channels receive a text entry and channel origin data of the text entry; determine whether the text entry matches an in-vocabulary (IV) entry or whether the text entry is an out-of-vocabulary (OOV) entry; if the text entry is determined to have a matching IV entry, output the matching IV entry, and if the text entry is determined to be an OOV entry, implement a channel-specific error-type adapter framework based on the channel origin data, wherein the channel-specific error-type adapter framework is optimized for a specific channel from which the text entry originated; normalize the text entry using the channel-specific error-type adapter framework; and output one or more candidate normalized forms of the text entry. 1. A method for text normalization in a plurality of noisy channels, performed on a computing device having a processor, memory, and one or more code sets stored in the memory and executing in the processor, the method comprising:
receiving, by the processor, a text entry and channel origin data of the text entry; determining, by the processor, whether the text entry matches an in-vocabulary (IV) entry or whether the text entry is an out-of-vocabulary (OOV) entry; if the text entry is determined to have a matching IV entry:
outputting, by the processor, the matching IV entry; and
if the text entry is determined to be an OOV entry:
implementing, by the processor, a channel-specific error-type adapter framework based on the channel origin data;
wherein the channel-specific error-type adapter framework is optimized for a specific channel from which the text entry originated;
normalizing, by the processor, the text entry using the channel-specific error-type adapter framework; and
outputting one or more candidate normalized forms of the text entry. 2. The method of claim 1:
wherein the channel-specific error-type adapter framework comprises one or more error-type adapters; and wherein each error-type adapter is configured to model a different type of error to be normalized. 3. The method of claim 1, comprising an initial step of building an interpolated language model and one or more static lexicons to be implemented when normalizing the text entry using the channel-specific error-type adapter framework. 4. The method of claim 3, wherein the interpolated language model comprises a merged combination of a basic language model and one or more channel-specific language models. 5. The method of claim 3, wherein the one or more static lexicons comprise at least one of a proper name lexicon, an abbreviation lexicon, and an acronym lexicon. 6. The method of claim 2, wherein the one or more error-type adapters comprise at least one of a spelling error-type adapter, an abbreviation error-type adapter, an acronym error-type adapter, a phonetic shorthand error-type adapter, and a word concatenation error-type adapter. 7. The method of claim 6, wherein the spelling error-type adapter is generated using one or more matrices, each matrix computing a probability of a specific edit operation being performed to identify a spelling error correction. 8. The method of claim 2, wherein the channel-specific error-type adapter framework further comprises a probabilistic model of one or more error-type priors. 9. The method of claim 2, further comprising applying linguistic heuristics to expand the channel-specific error-type adapter framework. 10. A system for text normalization in a plurality of noisy channels, comprising:
a processor; a memory; and one or more code sets stored in the memory and executing in the processor, which, when executed, configure the processor to:
receive a text entry and channel origin data of the text entry;
determine whether the text entry matches an in-vocabulary (IV) entry or whether the text entry is an out-of-vocabulary (OOV) entry;
if the text entry is determined to have a matching IV entry:
output the matching IV entry; and
if the text entry is determined to be an OOV entry:
implement a channel-specific error-type adapter framework based on the channel origin data;
wherein the channel-specific error-type adapter framework is optimized for a specific channel from which the text entry originated;
normalize the text entry using the channel-specific error-type adapter framework; and
output one or more candidate normalized forms of the text entry. 11. The system of claim 10:
wherein the channel-specific error-type adapter framework comprises one or more error-type adapters; and wherein each error-type adapter is configured to model a different type of error to be normalized. 12. The system of claim 10, wherein the one or more code sets further configure the processor to build an interpolated language model and one or more static lexicons to be implemented when normalizing the text entry using the channel-specific error-type adapter framework. 13. The system of claim 12, wherein the interpolated language model comprises a merged combination of a basic language model and one or more channel-specific language models. 14. The system of claim 12, wherein the one or more static lexicons comprise at least one of a proper name lexicon, an abbreviation lexicon, and an acronym lexicon. 15. The system of claim 11, wherein the one or more error-type adapters comprise at least one of a spelling error-type adapter, an abbreviation error-type adapter, an acronym error-type adapter, a phonetic shorthand error-type adapter, and a word concatenation error-type adapter. 16. The system of claim 15, wherein the one or more code sets configure the processor to generate the spelling error-type adapter using one or more matrices, each matrix computing a probability of a specific edit operation being performed to identify a spelling error correction. 17. The system of claim 11, wherein the channel-specific error-type adapter framework further comprises a probabilistic model of one or more error-type priors. 18. The system of claim 11, further comprising applying linguistic heuristics to expand the channel-specific error-type adapter framework. 19. A method for text normalization in a plurality of noisy channels, performed on a computing device having a processor, memory, and one or more code sets stored in the memory and executing in the processor, the method comprising:
receiving, by the processor, a text entry and channel origin data of the text entry; determining, by the processor, whether the text entry matches an in-vocabulary (IV) entry or whether the text entry is an out-of-vocabulary (OOV) entry;
wherein the matching IV entry is outputted when the text entry is determined to have a matching IV entry,
wherein a channel-specific error-type adapter framework is implemented based on the channel origin data when the text entry is determined to be an OOV entry, and
wherein the channel-specific error-type adapter framework is optimized for a specific channel from which the text entry originated;
normalizing, by the processor, the text entry using the channel-specific error-type adapter framework; and outputting one or more candidate normalized forms of the text entry. 20. The method of claim 19:
wherein the channel-specific error-type adapter framework comprises one or more error-type adapters; and wherein each error-type adapter is configured to model a different type of error to be normalized. | 2,600 |
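The claims above describe a noisy-channel normalizer: a text entry is first checked against an in-vocabulary (IV) lexicon, and only out-of-vocabulary (OOV) entries are routed through channel-specific error-type adapters (claims 15-17), whose candidates are ranked using error-type priors. The Python sketch below illustrates that control flow only; the lexicon, the adapter tables, and the channel priors are invented toy data, not values from the patent.

```python
# Toy IV lexicon; a real system would load one per the static lexicons of claim 12.
IV_LEXICON = {"see", "you", "tomorrow", "before", "people", "great"}

def abbreviation_adapter(token):
    """Yield (candidate, error_type) pairs for abbreviation-style errors."""
    table = {"tmrw": "tomorrow", "ppl": "people"}  # illustrative only
    if token in table:
        yield table[token], "abbreviation"

def phonetic_adapter(token):
    """Yield (candidate, error_type) pairs for phonetic-shorthand errors."""
    table = {"b4": "before", "u": "you", "c": "see"}  # illustrative only
    if token in table:
        yield table[token], "phonetic"

# Channel-specific priors over error types (claim 17): SMS is assumed to
# favor phonetic shorthand, chat to favor abbreviations -- pure assumption.
CHANNEL_PRIORS = {
    "sms":  {"abbreviation": 0.4, "phonetic": 0.6},
    "chat": {"abbreviation": 0.7, "phonetic": 0.3},
}

def normalize(token, channel):
    """Return candidate normalized forms of a text entry, best first."""
    if token in IV_LEXICON:
        return [token]                      # IV entry: output the match itself
    priors = CHANNEL_PRIORS[channel]
    scored = [(priors[error_type], candidate)
              for adapter in (abbreviation_adapter, phonetic_adapter)
              for candidate, error_type in adapter(token)]
    scored.sort(reverse=True)               # rank by channel-specific prior
    return [candidate for _, candidate in scored]
```

For example, `normalize("tmrw", "sms")` returns `["tomorrow"]` via the abbreviation adapter, while `normalize("see", "chat")` short-circuits at the IV check and returns `["see"]`.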
9,795 | 9,795 | 14,836,067 | 2,677 | Mechanisms are provided for implementing a logical reasoning and justification engine that operates to receive a logical parse data structure of natural language content. The logical parse data structure comprises nodes and edges linking nodes and identifies latent logical terms within the natural language content indicative of logical relationships between elements of the natural language content. The engine further operates to receive a selection of a node in the logical parse data structure to thereby form a selected node, and execute at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node. The engine further operates to generate a logical justification output based on the identified zero or more justifying nodes, and output the logical justification output. | 1. A method, in a data processing system comprising a processor and a memory comprising instructions which, when executed by the processor, cause the processor to implement a logical reasoning and justification engine, the method being executed by the logical reasoning and justification engine to:
receive a logical parse data structure of natural language content, wherein the logical parse data structure comprises nodes and edges linking nodes and identifies latent logical terms within the natural language content indicative of logical relationships between elements of the natural language content; receive a selection of a node in the logical parse data structure to thereby form a selected node; execute at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node; generate a logical justification output based on the identified zero or more justifying nodes; and output the logical justification output. 2. The method of claim 1, wherein each justification module in the at least one justification module is configured to reverse engineer a corresponding knowledge reasoner used to propagate knowledge when generating the logical parse data structure. 3. The method of claim 2, wherein each justification module executes a different set of logical justification operations depending on a unique set of reasoning and propagation rules implemented by the corresponding knowledge reasoner. 4. The method of claim 2, wherein the logical parse data structure is generated at least by applying one or more of an evidential support reasoner, a relevance reasoner, or a co-reference reasoner to the nodes of the logical parse data structure to propagate knowledge throughout the logical parse data structure, and wherein the at least one justification module comprises at least one of an evidential support logical justification module, a relevance logical justification module, or a co-reference logical justification module. 5. 
The method of claim 1, wherein executing the at least one logical justification module on the selected node comprises executing, in a first iteration, the at least one logical justification module on the selected node and then, in at least one subsequent iteration, executing the at least one logical justification module on justifying nodes identified in a previous iteration of the execution of the at least one logical justification module until no new justifying nodes are identified. 6. The method of claim 1, wherein generating the logical justification output based on the identified zero or more justifying nodes comprises extracting facts from the zero or more justifying nodes and utilizing the facts to compose a factual statement as to a justification for the knowledge state of the selected node. 7. The method of claim 1, wherein the at least one logical justification module comprises an evidential support logical justification module, and wherein executing at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node comprises:
collecting first knowledge contributions from zero or more child nodes of the selected node in accordance with an evidential support reasoner's propagation rules; collecting second knowledge contributions from a parent node of the selected node in accordance with the evidential support reasoner's propagation rules; collecting sideways propagation knowledge contributions from the parent node and zero or more sibling nodes of the selected node; and combining the first, second, and sideways propagation knowledge contributions to generate a set of justification facts for justifying the knowledge state of the selected node. 8. The method of claim 1, wherein the at least one logical justification module comprises a co-reference logical justification module, and wherein executing at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node comprises:
determining one or more nodes across the logical parse data structure that have a logical or semantic match to the selected node to thereby generate a set of candidate nodes; quantifying, for each candidate node, a strength of a match between the candidate node and the selected node to thereby generate a set of matching nodes; determining, for each matching node, a knowledge contribution from the matching node to the selected node; selecting a set of unique matching nodes to be part of the set of justifying nodes; and generating a set of justification facts for justifying the knowledge state of the selected node based on facts associated with the set of unique matching nodes. 9. The method of claim 8, wherein determining a knowledge contribution from the matching node to the selected node comprises determining a transfer of a maximum truth or falsity value from the matching node to the selected node. 10. The method of claim 1, wherein the at least one logical justification module comprises a relevance reasoner logical justification module, and wherein executing at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node comprises:
collecting first relevance knowledge contributions from a direct parent node of the selected node such that a relevance metric of the selected node can only be decreased if a relevance metric of the parent node is smaller; collecting second relevance knowledge contributions from one or more direct child nodes of the selected node such that the relevance metric of the selected node can only be decreased if a maximum relevance metric of the one or more child nodes is smaller; and combining the first and second relevance knowledge contributions to generate a set of justification facts for justifying the knowledge state of the selected node. 11. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to implement a logical reasoning and justification engine that operates to:
receive a logical parse data structure of natural language content, wherein the logical parse data structure comprises nodes and edges linking nodes and identifies latent logical terms within the natural language content indicative of logical relationships between elements of the natural language content; receive a selection of a node in the logical parse data structure to thereby form a selected node; execute at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node; generate a logical justification output based on the identified zero or more justifying nodes; and output the logical justification output. 12. The computer program product of claim 11, wherein each justification module in the at least one justification module is configured to reverse engineer a corresponding knowledge reasoner used to propagate knowledge when generating the logical parse data structure. 13. The computer program product of claim 12, wherein the logical parse data structure is generated at least by applying one or more of an evidential support reasoner, a relevance reasoner, or a co-reference reasoner to the nodes of the logical parse data structure to propagate knowledge throughout the logical parse data structure, and wherein the at least one justification module comprises at least one of an evidential support logical justification module, a relevance logical justification module, or a co-reference logical justification module. 14. 
The computer program product of claim 11, wherein executing the at least one logical justification module on the selected node comprises executing, in a first iteration, the at least one logical justification module on the selected node and then, in at least one subsequent iteration, executing the at least one logical justification module on justifying nodes identified in a previous iteration of the execution of the at least one logical justification module until no new justifying nodes are identified. 15. The computer program product of claim 11, wherein generating the logical justification output based on the identified zero or more justifying nodes comprises extracting facts from the zero or more justifying nodes and utilizing the facts to compose a factual statement as to a justification for the knowledge state of the selected node. 16. The computer program product of claim 11, wherein the at least one logical justification module comprises an evidential support logical justification module, and wherein executing at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node comprises:
collecting first knowledge contributions from zero or more child nodes of the selected node in accordance with an evidential support reasoner's propagation rules; collecting second knowledge contributions from a parent node of the selected node in accordance with the evidential support reasoner's propagation rules; collecting sideways propagation knowledge contributions from the parent node and zero or more sibling nodes of the selected node; and combining the first, second, and sideways propagation knowledge contributions to generate a set of justification facts for justifying the knowledge state of the selected node. 17. The computer program product of claim 11, wherein the at least one logical justification module comprises a co-reference logical justification module, and wherein executing at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node comprises:
determining one or more nodes across the logical parse data structure that have a logical or semantic match to the selected node to thereby generate a set of candidate nodes; quantifying, for each candidate node, a strength of a match between the candidate node and the selected node to thereby generate a set of matching nodes; determining, for each matching node, a knowledge contribution from the matching node to the selected node; selecting a set of unique matching nodes to be part of the set of justifying nodes; and generating a set of justification facts for justifying the knowledge state of the selected node based on facts associated with the set of unique matching nodes. 18. The computer program product of claim 17, wherein determining a knowledge contribution from the matching node to the selected node comprises determining a transfer of a maximum truth or falsity value from the matching node to the selected node. 19. The computer program product of claim 11, wherein the at least one logical justification module comprises a relevance reasoner logical justification module, and wherein executing at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node comprises:
collecting first relevance knowledge contributions from a direct parent node of the selected node such that a relevance metric of the selected node can only be decreased if a relevance metric of the parent node is smaller; collecting second relevance knowledge contributions from one or more direct child nodes of the selected node such that the relevance metric of the selected node can only be decreased if a maximum relevance metric of the one or more child nodes is smaller; and combining the first and second relevance knowledge contributions to generate a set of justification facts for justifying the knowledge state of the selected node. 20. An apparatus comprising:
a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: receive a logical parse data structure of natural language content, wherein the logical parse data structure comprises nodes and edges linking nodes and identifies latent logical terms within the natural language content indicative of logical relationships between elements of the natural language content; receive a selection of a node in the logical parse data structure to thereby form a selected node; execute at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node; generate a logical justification output based on the identified zero or more justifying nodes; and output the logical justification output. | Mechanisms are provided for implementing a logical reasoning and justification engine that operates to receive a logical parse data structure of natural language content. The logical parse data structure comprises nodes and edges linking nodes and identifies latent logical terms within the natural language content indicative of logical relationships between elements of the natural language content. The engine further operates to receive a selection of a node in the logical parse data structure to thereby form a selected node, and execute at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node. The engine further operates to generate a logical justification output based on the identified zero or more justifying nodes, and output the logical justification output.1. 
A method, in a data processing system comprising a processor and a memory comprising instructions which, when executed by the processor, cause the processor to implement a logical reasoning and justification engine, the method being executed by the logical reasoning and justification engine to:
receive a logical parse data structure of natural language content, wherein the logical parse data structure comprises nodes and edges linking nodes and identifies latent logical terms within the natural language content indicative of logical relationships between elements of the natural language content; receive a selection of a node in the logical parse data structure to thereby form a selected node; execute at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node; generate a logical justification output based on the identified zero or more justifying nodes; and output the logical justification output. 2. The method of claim 1, wherein each justification module in the at least one justification module is configured to reverse engineer a corresponding knowledge reasoner used to propagate knowledge when generating the logical parse data structure. 3. The method of claim 2, wherein each justification module executes a different set of logical justification operations depending on a unique set of reasoning and propagation rules implemented by the corresponding knowledge reasoner. 4. The method of claim 2, wherein the logical parse data structure is generated at least by applying one or more of an evidential support reasoner, a relevance reasoner, or a co-reference reasoner to the nodes of the logical parse data structure to propagate knowledge throughout the logical parse data structure, and wherein the at least one justification module comprises at least one of an evidential support logical justification module, a relevance logical justification module, or a co-reference logical justification module. 5. 
The method of claim 1, wherein executing the at least one logical justification module on the selected node comprises executing, in a first iteration, the at least one logical justification module on the selected node and then, in at least one subsequent iteration, executing the at least one logical justification module on justifying nodes identified in a previous iteration of the execution of the at least one logical justification module until no new justifying nodes are identified. 6. The method of claim 1, wherein generating the logical justification output based on the identified zero or more justifying nodes comprises extracting facts from the zero or more justifying nodes and utilizing the facts to compose a factual statement as to a justification for the knowledge state of the selected node. 7. The method of claim 1, wherein the at least one logical justification module comprises an evidential support logical justification module, and wherein executing at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node comprises:
collecting first knowledge contributions from zero or more child nodes of the selected node in accordance with an evidential support reasoner's propagation rules; collecting second knowledge contributions from a parent node of the selected node in accordance with the evidential support reasoner's propagation rules; collecting sideways propagation knowledge contributions from the parent node and zero or more sibling nodes of the selected node; and combining the first, second, and sideways propagation knowledge contributions to generate a set of justification facts for justifying the knowledge state of the selected node. 8. The method of claim 1, wherein the at least one logical justification module comprises a co-reference logical justification module, and wherein executing at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node comprises:
determining one or more nodes across the logical parse data structure that have a logical or semantic match to the selected node to thereby generate a set of candidate nodes; quantifying, for each candidate node, a strength of a match between the candidate node and the selected node to thereby generate a set of matching nodes; determining, for each matching node, a knowledge contribution from the matching node to the selected node; selecting a set of unique matching nodes to be part of the set of justifying nodes; and generating a set of justification facts for justifying the knowledge state of the selected node based on facts associated with the set of unique matching nodes. 9. The method of claim 8, wherein determining a knowledge contribution from the matching node to the selected node comprises determining a transfer of a maximum truth or falsity value from the matching node to the selected node. 10. The method of claim 1, wherein the at least one logical justification module comprises a relevance reasoner logical justification module, and wherein executing at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node comprises:
collecting first relevance knowledge contributions from a direct parent node of the selected node such that a relevance metric of the selected node can only be decreased if a relevance metric of the parent node is smaller; collecting second relevance knowledge contributions from one or more direct child nodes of the selected node such that the relevance metric of the selected node can only be decreased if a maximum relevance metric of the one or more child nodes is smaller; and combining the first and second relevance knowledge contributions to generate a set of justification facts for justifying the knowledge state of the selected node. 11. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to implement a logical reasoning and justification engine that operates to:
receive a logical parse data structure of natural language content, wherein the logical parse data structure comprises nodes and edges linking nodes and identifies latent logical terms within the natural language content indicative of logical relationships between elements of the natural language content; receive a selection of a node in the logical parse data structure to thereby form a selected node; execute at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node; generate a logical justification output based on the identified zero or more justifying nodes; and output the logical justification output. 12. The computer program product of claim 11, wherein each justification module in the at least one justification module is configured to reverse engineer a corresponding knowledge reasoner used to propagate knowledge when generating the logical parse data structure. 13. The computer program product of claim 12, wherein the logical parse data structure is generated at least by applying one or more of an evidential support reasoner, a relevance reasoner, or a co-reference reasoner to the nodes of the logical parse data structure to propagate knowledge throughout the logical parse data structure, and wherein the at least one justification module comprises at least one of an evidential support logical justification module, a relevance logical justification module, or a co-reference logical justification module. 14. 
The computer program product of claim 11, wherein executing the at least one logical justification module on the selected node comprises executing, in a first iteration, the at least one logical justification module on the selected node and then, in at least one subsequent iteration, executing the at least one logical justification module on justifying nodes identified in a previous iteration of the execution of the at least one logical justification module until no new justifying nodes are identified. 15. The computer program product of claim 11, wherein generating the logical justification output based on the identified zero or more justifying nodes comprises extracting facts from the zero or more justifying nodes and utilizing the facts to compose a factual statement as to a justification for the knowledge state of the selected node. 16. The computer program product of claim 11, wherein the at least one logical justification module comprises an evidential support logical justification module, and wherein executing at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node comprises:
collecting first knowledge contributions from zero or more child nodes of the selected node in accordance with an evidential support reasoner's propagation rules; collecting second knowledge contributions from a parent node of the selected node in accordance with the evidential support reasoner's propagation rules; collecting sideways propagation knowledge contributions from the parent node and zero or more sibling nodes of the selected node; and combining the first, second, and sideways propagation knowledge contributions to generate a set of justification facts for justifying the knowledge state of the selected node. 17. The computer program product of claim 11, wherein the at least one logical justification module comprises a co-reference logical justification module, and wherein executing at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node comprises:
determining one or more nodes across the logical parse data structure that have a logical or semantic match to the selected node to thereby generate a set of candidate nodes; quantifying, for each candidate node, a strength of a match between the candidate node and the selected node to thereby generate a set of matching nodes; determining, for each matching node, a knowledge contribution from the matching node to the selected node; selecting a set of unique matching nodes to be part of the set of justifying nodes; and generating a set of justification facts for justifying the knowledge state of the selected node based on facts associated with the set of unique matching nodes. 18. The computer program product of claim 17, wherein determining a knowledge contribution from the matching node to the selected node comprises determining a transfer of a maximum truth or falsity value from the matching node to the selected node. 19. The computer program product of claim 11, wherein the at least one logical justification module comprises a relevance reasoner logical justification module, and wherein executing at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node comprises:
collecting first relevance knowledge contributions from a direct parent node of the selected node such that a relevance metric of the selected node can only be decreased if a relevance metric of the parent node is smaller; collecting second relevance knowledge contributions from one or more direct child nodes of the selected node such that the relevance metric of the selected node can only be decreased if a maximum relevance metric of the one or more child nodes is smaller; and combining the first and second relevance knowledge contributions to generate a set of justification facts for justifying the knowledge state of the selected node. 20. An apparatus comprising:
a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: receive a logical parse data structure of natural language content, wherein the logical parse data structure comprises nodes and edges linking nodes and identifies latent logical terms within the natural language content indicative of logical relationships between elements of the natural language content; receive a selection of a node in the logical parse data structure to thereby form a selected node; execute at least one logical justification module on the selected node to identify zero or more justifying nodes that provide a contribution to a knowledge state of the selected node; generate a logical justification output based on the identified zero or more justifying nodes; and output the logical justification output. | 2,600 |
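Claims 7 and 16 above describe an evidential support justification module that collects knowledge contributions from a selected node's children, its parent, and its siblings, and combines them into a set of justification facts. The sketch below implements one plausible reading on a toy parse tree. Two assumptions are mine, not the patent's: truth values are assumed to have been propagated as the maximum over child values, and a neighbor is treated as justifying whenever its value is at least the selected node's value.

```python
class Node:
    """Toy parse-tree node carrying a propagated truth value (assumption)."""
    def __init__(self, name, truth, parent=None):
        self.name, self.truth = name, truth
        self.parent, self.children = parent, []
        if parent:
            parent.children.append(self)

def justify(selected):
    """Collect justifying nodes for the selected node's knowledge state.

    Assumed propagation rule: a node's truth value was set to the max of
    its neighbors' contributions, so any child, parent, or sibling whose
    value is >= the selected node's value may account for its state.
    """
    contributions = []
    # 1) first contributions: child nodes (upward evidential propagation)
    contributions += [c for c in selected.children if c.truth >= selected.truth]
    if selected.parent:
        # 2) second contributions: the parent node (downward propagation)
        if selected.parent.truth >= selected.truth:
            contributions.append(selected.parent)
        # 3) sideways propagation: siblings sharing the same parent
        contributions += [s for s in selected.parent.children
                          if s is not selected and s.truth >= selected.truth]
    # combine into a set of justification facts (node name, truth value)
    return [(n.name, n.truth) for n in contributions]
```

On a tree with root(0.9) having children a(0.9) and b(0.4), and a having child c(0.9), `justify(a)` yields `[("c", 0.9), ("root", 0.9)]`: the child and parent can account for a's state, while the weaker sibling b cannot.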
9,796 | 9,796 | 15,263,889 | 2,659 | Mechanisms are provided for processing natural language content. The mechanisms receive natural language content and analyze the natural language content to generate a parse tree data structure. The mechanisms process the parse tree data structure to identify one or more instances of hypothetical spans in the natural language content. The hypothetical spans are terms or phrases indicative of a hypothetical statement. The mechanisms perform an operation based on the natural language content. The operation is performed with portions of the natural language content corresponding to the one or more identified instances of hypothetical spans being given different relative weights within portions of the natural language content than other portions of the natural language content. | 1. A method, in a data processing system comprising at least one processor and at least one memory, the at least one memory comprising instructions which are executed by the at least one processor and specifically configure the processor to perform the method, wherein the method comprises:
receiving, by the data processing system, natural language content; analyzing, by the data processing system, the natural language content to generate a parse tree data structure; processing, by the data processing system, the parse tree data structure to identify one or more instances of hypothetical spans in the natural language content, wherein hypothetical spans are terms or phrases indicative of a hypothetical statement; and performing, by the data processing system, an operation based on the natural language content, wherein the operation is performed with portions of the natural language content corresponding to the one or more identified instances of hypothetical spans being given different relative weights within portions of the natural language content than other portions of the natural language content. 2. The method of claim 1, wherein processing the parse tree to identify one or more instances of hypothetical spans comprises:
identifying a hypothetical trigger within the parse tree data structure; and annotating the natural language content signifying the content within the hypothetical span to be associated with the hypothetical trigger. 3. The method of claim 1, further comprising:
removing, by the data processing system, one or more sub-tree data structures of the parse tree data structure that correspond to the one or more instances of hypothetical spans, to thereby generate a hypothetical pruned parse tree data structure, wherein the operation is performed based on the hypothetical pruned parse tree data structure. 4. The method of claim 1, wherein the performing the operation comprises:
training, by the data processing system, a model of a natural language processing system based on the identification of the one or more instances of hypothetical spans in the natural language content; and performing, by the natural language processing system, natural language processing of natural language content based on the trained model. 5. The method of claim 2, wherein processing the parse tree data structure further comprises, for each instance of a hypothetical trigger found in the parse tree data structure:
analyzing the hypothetical trigger using a dictionary data structure to determine a part-of-speech attribute of the hypothetical trigger; and utilizing the determined part-of-speech attribute to determine a measure of whether or not the hypothetical trigger corresponds to a hypothetical statement. 6. The method of claim 5, wherein utilizing the determined part-of-speech attribute to determine a measure of whether or not the hypothetical trigger corresponds to a hypothetical statement comprises:
generating a tuple representation of a sub-tree data structure corresponding to the hypothetical trigger; retrieving, from the dictionary data structure, one or more dictionary definitions of a term present in the hypothetical trigger; and determining a part-of-speech attribute of the hypothetical trigger based on a correlation of the tuple representation of the sub-tree data structure with the one or more dictionary definitions. 7. The method of claim 6, wherein, in response to the part-of-speech attribute indicating that the hypothetical trigger is a noun, the sub-tree data structure corresponding to the hypothetical trigger is determined to not be directed to a hypothetical statement. 8. The method of claim 1, wherein the natural language processing system is a medical treatment recommendation system, and wherein the operation comprises generating treatment recommendations based on content of a patient electronic medical record. 9. The method of claim 1, wherein processing the parse tree data structure further comprises processing the parse tree data structure to identify instances of factual triggers, wherein factual triggers are terms or phrases indicative of a factual statement. 10. The method of claim 9, further comprising:
determining if a factual sub-tree is present within a hypothetical sub-tree; and in response to the factual sub-tree being present within a hypothetical sub-tree, removing the factual sub-tree from the hypothetical sub-tree to generate a modified hypothetical sub-tree prior to further processing of the modified hypothetical sub-tree. 11. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, specifically configures the computing device, and causes the computing device, to:
receive natural language content; analyze the natural language content to generate a parse tree data structure; process the parse tree data structure to identify one or more instances of hypothetical spans in the natural language content, wherein hypothetical spans are terms or phrases indicative of a hypothetical statement; and perform an operation based on the natural language content, wherein the operation is performed with portions of the natural language content corresponding to the one or more identified instances of hypothetical spans being given different relative weights within portions of the natural language content than other portions of the natural language content. 12. The computer program product of claim 11, wherein the computer readable program further causes the computing device to process the parse tree to identify one or more instances of hypothetical spans at least by:
identifying a hypothetical trigger within the parse tree data structure; and annotating the natural language content signifying the content within the hypothetical span to be associated with the hypothetical trigger. 13. The computer program product of claim 11, wherein the computer readable program further causes the computing device to:
remove one or more sub-tree data structures of the parse tree data structure that correspond to the one or more instances of hypothetical spans, to thereby generate a hypothetical pruned parse tree data structure, wherein the operation is performed based on the hypothetical pruned parse tree data structure. 14. The computer program product of claim 11, wherein the computer readable program further causes the computing device to perform the operation at least by:
training a model of a natural language processing system based on the identification of the one or more instances of hypothetical spans in the natural language content; and performing natural language processing of natural language content based on the trained model. 15. The computer program product of claim 12, wherein the computer readable program further causes the computing device to process the parse tree data structure at least by, for each instance of a hypothetical trigger found in the parse tree data structure:
analyzing the hypothetical trigger using a dictionary data structure to determine a part-of-speech attribute of the hypothetical trigger; and utilizing the determined part-of-speech attribute to determine a measure of whether or not the hypothetical trigger corresponds to a hypothetical statement. 16. The computer program product of claim 15, wherein the computer readable program further causes the computing device to utilize the determined part-of-speech attribute to determine a measure of whether or not the hypothetical trigger corresponds to a hypothetical statement at least by:
generating a tuple representation of a sub-tree data structure corresponding to the hypothetical trigger; retrieving, from the dictionary data structure, one or more dictionary definitions of a term present in the hypothetical trigger; and determining a part-of-speech attribute of the hypothetical trigger based on a correlation of the tuple representation of the sub-tree data structure with the one or more dictionary definitions. 17. The computer program product of claim 16, wherein, in response to the part-of-speech attribute indicating that the hypothetical trigger is a noun, the sub-tree data structure corresponding to the hypothetical trigger is determined to not be directed to a hypothetical statement. 18. The computer program product of claim 11, wherein the natural language processing system is a medical treatment recommendation system, and wherein the operation comprises generating treatment recommendations based on content of a patient electronic medical record. 19. The computer program product of claim 11, wherein the computer readable program further causes the computing device to process the parse tree data structure at least by:
processing the parse tree data structure to identify instances of factual triggers, wherein factual triggers are terms or phrases indicative of a factual statement; determining if a factual sub-tree is present within a hypothetical sub-tree; and in response to the factual sub-tree being present within a hypothetical sub-tree, removing the factual sub-tree from the hypothetical sub-tree to generate a modified hypothetical sub-tree prior to further processing of the modified hypothetical sub-tree. 20. An apparatus comprising:
a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, specifically configure the processor and cause the processor to: receive natural language content; analyze the natural language content to generate a parse tree data structure; process the parse tree data structure to identify one or more instances of hypothetical spans in the natural language content, wherein hypothetical spans are terms or phrases indicative of a hypothetical statement; and perform an operation based on the natural language content, wherein the operation is performed with portions of the natural language content corresponding to the one or more identified instances of hypothetical spans being given different relative weights within portions of the natural language content than other portions of the natural language content. | Mechanisms are provided for processing natural language content. The mechanisms receive natural language content and analyze the natural language content to generate a parse tree data structure. The mechanisms process the parse tree data structure to identify one or more instances of hypothetical spans in the natural language content. The hypothetical spans are terms or phrases indicative of a hypothetical statement. The mechanisms perform an operation based on the natural language content. The operation is performed with portions of the natural language content corresponding to the one or more identified instances of hypothetical spans being given different relative weights within portions of the natural language content than other portions of the natural language content. 1. A method, in a data processing system comprising at least one processor and at least one memory, the at least one memory comprising instructions which are executed by the at least one processor and specifically configure the processor to perform the method, wherein the method comprises:
receiving, by the data processing system, natural language content; analyzing, by the data processing system, the natural language content to generate a parse tree data structure; processing, by the data processing system, the parse tree data structure to identify one or more instances of hypothetical spans in the natural language content, wherein hypothetical spans are terms or phrases indicative of a hypothetical statement; and performing, by the data processing system, an operation based on the natural language content, wherein the operation is performed with portions of the natural language content corresponding to the one or more identified instances of hypothetical spans being given different relative weights within portions of the natural language content than other portions of the natural language content. 2. The method of claim 1, wherein processing the parse tree to identify one or more instances of hypothetical spans comprises:
identifying a hypothetical trigger within the parse tree data structure; and annotating the natural language content signifying the content within the hypothetical span to be associated with the hypothetical trigger. 3. The method of claim 1, further comprising:
removing, by the data processing system, one or more sub-tree data structures of the parse tree data structure that correspond to the one or more instances of hypothetical spans, to thereby generate a hypothetical pruned parse tree data structure, wherein the operation is performed based on the hypothetical pruned parse tree data structure. 4. The method of claim 1, wherein performing the operation comprises:
training, by the data processing system, a model of a natural language processing system based on the identification of the one or more instances of hypothetical spans in the natural language content; and performing, by the natural language processing system, natural language processing of natural language content based on the trained model. 5. The method of claim 2, wherein processing the parse tree data structure further comprises, for each instance of a hypothetical trigger found in the parse tree data structure:
analyzing the hypothetical trigger using a dictionary data structure to determine a part-of-speech attribute of the hypothetical trigger; and utilizing the determined part-of-speech attribute to determine a measure of whether or not the hypothetical trigger corresponds to a hypothetical statement. 6. The method of claim 5, wherein utilizing the determined part-of-speech attribute to determine a measure of whether or not the hypothetical trigger corresponds to a hypothetical statement comprises:
generating a tuple representation of a sub-tree data structure corresponding to the hypothetical trigger; retrieving, from the dictionary data structure, one or more dictionary definitions of a term present in the hypothetical trigger; and determining a part-of-speech attribute of the hypothetical trigger based on a correlation of the tuple representation of the sub-tree data structure with the one or more dictionary definitions. 7. The method of claim 6, wherein, in response to the part-of-speech attribute indicating that the hypothetical trigger is a noun, the sub-tree data structure corresponding to the hypothetical trigger is determined to not be directed to a hypothetical statement. 8. The method of claim 1, wherein the natural language processing system is a medical treatment recommendation system, and wherein the operation comprises generating treatment recommendations based on content of a patient electronic medical record. 9. The method of claim 1, wherein processing the parse tree data structure further comprises processing the parse tree data structure to identify instances of factual triggers, wherein factual triggers are terms or phrases indicative of a factual statement. 10. The method of claim 9, further comprising:
determining if a factual sub-tree is present within a hypothetical sub-tree; and in response to the factual sub-tree being present within a hypothetical sub-tree, removing the factual sub-tree from the hypothetical sub-tree to generate a modified hypothetical sub-tree prior to further processing of the modified hypothetical sub-tree. 11. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, specifically configures the computing device, and causes the computing device, to:
receive natural language content; analyze the natural language content to generate a parse tree data structure; process the parse tree data structure to identify one or more instances of hypothetical spans in the natural language content, wherein hypothetical spans are terms or phrases indicative of a hypothetical statement; and perform an operation based on the natural language content, wherein the operation is performed with portions of the natural language content corresponding to the one or more identified instances of hypothetical spans being given different relative weights within portions of the natural language content than other portions of the natural language content. 12. The computer program product of claim 11, wherein the computer readable program further causes the computing device to process the parse tree to identify one or more instances of hypothetical spans at least by:
identifying a hypothetical trigger within the parse tree data structure; and annotating the natural language content signifying the content within the hypothetical span to be associated with the hypothetical trigger. 13. The computer program product of claim 11, wherein the computer readable program further causes the computing device to:
remove one or more sub-tree data structures of the parse tree data structure that correspond to the one or more instances of hypothetical spans, to thereby generate a hypothetical pruned parse tree data structure, wherein the operation is performed based on the hypothetical pruned parse tree data structure. 14. The computer program product of claim 11, wherein the computer readable program further causes the computing device to perform the operation at least by:
training a model of a natural language processing system based on the identification of the one or more instances of hypothetical spans in the natural language content; and performing natural language processing of natural language content based on the trained model. 15. The computer program product of claim 12, wherein the computer readable program further causes the computing device to process the parse tree data structure at least by, for each instance of a hypothetical trigger found in the parse tree data structure:
analyzing the hypothetical trigger using a dictionary data structure to determine a part-of-speech attribute of the hypothetical trigger; and utilizing the determined part-of-speech attribute to determine a measure of whether or not the hypothetical trigger corresponds to a hypothetical statement. 16. The computer program product of claim 15, wherein the computer readable program further causes the computing device to utilize the determined part-of-speech attribute to determine a measure of whether or not the hypothetical trigger corresponds to a hypothetical statement at least by:
generating a tuple representation of a sub-tree data structure corresponding to the hypothetical trigger; retrieving, from the dictionary data structure, one or more dictionary definitions of a term present in the hypothetical trigger; and determining a part-of-speech attribute of the hypothetical trigger based on a correlation of the tuple representation of the sub-tree data structure with the one or more dictionary definitions. 17. The computer program product of claim 16, wherein, in response to the part-of-speech attribute indicating that the hypothetical trigger is a noun, the sub-tree data structure corresponding to the hypothetical trigger is determined to not be directed to a hypothetical statement. 18. The computer program product of claim 11, wherein the natural language processing system is a medical treatment recommendation system, and wherein the operation comprises generating treatment recommendations based on content of a patient electronic medical record. 19. The computer program product of claim 11, wherein the computer readable program further causes the computing device to process the parse tree data structure at least by:
processing the parse tree data structure to identify instances of factual triggers, wherein factual triggers are terms or phrases indicative of a factual statement; determining if a factual sub-tree is present within a hypothetical sub-tree; and in response to the factual sub-tree being present within a hypothetical sub-tree, removing the factual sub-tree from the hypothetical sub-tree to generate a modified hypothetical sub-tree prior to further processing of the modified hypothetical sub-tree. 20. An apparatus comprising:
a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, specifically configure the processor and cause the processor to: receive natural language content; analyze the natural language content to generate a parse tree data structure; process the parse tree data structure to identify one or more instances of hypothetical spans in the natural language content, wherein hypothetical spans are terms or phrases indicative of a hypothetical statement; and perform an operation based on the natural language content, wherein the operation is performed with portions of the natural language content corresponding to the one or more identified instances of hypothetical spans being given different relative weights within portions of the natural language content than other portions of the natural language content. | 2,600 |
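The claims of this record describe a concrete algorithm: scan a parse tree for "hypothetical triggers", treat the sub-tree under a trigger as a hypothetical span, then either down-weight that span (claim 1) or prune it from the tree (claim 3). A minimal sketch of that idea follows; the toy `Node` type and `TRIGGERS` lexicon are illustrative assumptions, not taken from the application, which obtains its trees from a full natural language parser.

```python
from dataclasses import dataclass, field
from typing import List

# Assumed trigger lexicon; the application does not enumerate its triggers.
TRIGGERS = {"if", "suppose", "would", "assuming"}

@dataclass
class Node:
    word: str
    children: List["Node"] = field(default_factory=list)

def tokens(node: Node) -> List[str]:
    """Flatten a (sub-)tree into its tokens, root first."""
    out = [node.word]
    for child in node.children:
        out.extend(tokens(child))
    return out

def weight_tokens(root: Node, span_weight=0.2, default_weight=1.0):
    """Claim 1: tokens inside a hypothetical span (the sub-tree rooted
    at a trigger word) receive a lower relative weight."""
    result = []
    def walk(node, in_span):
        here = in_span or node.word.lower() in TRIGGERS
        result.append((node.word, span_weight if here else default_weight))
        for child in node.children:
            walk(child, here)
    walk(root, False)
    return result

def prune_hypothetical(node: Node) -> Node:
    """Claim 3: drop sub-trees rooted at a trigger, yielding a
    'hypothetical pruned' parse tree."""
    node.children = [prune_hypothetical(c) for c in node.children
                     if c.word.lower() not in TRIGGERS]
    return node
```

For a dependency-style toy tree of "the drug helps if symptoms worsen", the tokens "if", "worsen" and "symptoms" receive the reduced weight, and pruning leaves only "helps", "drug", "the". A real system would also disambiguate noun-versus-verb triggers against a dictionary, as claims 5 and 6 describe.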
9,797 | 9,797 | 14,888,940 | 2,641 | A method for cell selection and/or cell reselection handling includes: performing, by a user equipment, a cell selection and/or cell reselection procedure wherein both a first radio cell and a second radio cell fulfill a cell selection criterion with the first radio cell being prioritized relative to the second radio cell; attempting, by the user equipment, to connect to a public land mobile network using a random access channel of a first base station entity, and failing to connect to the public land mobile network using the random access channel of the first base station entity; and performing, by the user equipment, a modified cell selection and/or cell reselection procedure, under unchanged radio conditions, whereby the user equipment attempts to connect to the public land mobile network using a random access channel of the second base station entity. | 1. A method for cell selection and/or cell reselection handling by a user equipment attempting to connect to a public land mobile network using a random access channel of a base station entity, wherein the public land mobile network comprises a first radio cell with a first base station entity and a second radio cell with a second base station entity, comprising:
performing, by the user equipment, a cell selection and/or cell reselection procedure wherein both the first radio cell and the second radio cell fulfill a cell selection criterion, with the first radio cell being prioritized relative to the second radio cell; attempting, by the user equipment, to connect to the public land mobile network using a random access channel of the first base station entity, and failing to connect to the public land mobile network using the random access channel of the first base station entity; and performing, by the user equipment, a modified cell selection and/or cell reselection procedure, under unchanged radio conditions regarding the first base station entity, the second base station entity, and the user equipment, whereby the user equipment attempts to connect to the public land mobile network using a random access channel of the second base station entity. 2. The method according to claim 1, wherein performing the modified cell selection and/or cell reselection procedure further comprises:
applying a penalty information with respect to the first radio cell so as to lower a priority of the first radio cell relative to a priority of the second radio cell. 3. The method according to claim 2, wherein the penalty information comprises an offset information, wherein applying the penalty information reduces at least one of the following values with respect to the first radio cell:
the cell selection RX level value (Srxlev), the cell selection quality value (Squal), the measured cell RX level value (RSRP), the measured cell quality value (RSRQ). 4. The method according to claim 2, wherein the penalty information comprises a penalty-related timer information, wherein the penalty information is applied during a penalty time interval indicated by the penalty-related timer information, and wherein the penalty time interval starts after the user equipment fails to connect to the public land mobile network using the random access channel of the first base station entity. 5. The method according to claim 4, wherein after the expiration of the penalty time interval, the cell selection and/or cell reselection procedure is conducted by the user equipment without application of the penalty information. 6. The method according to claim 2, wherein the penalty information is transmitted from the first base station entity to the user equipment. 7. The method according to claim 2, wherein the penalty information is stored in the user equipment. 8. A public land mobile network that facilitates cell selection and/or cell reselection handling for a user equipment attempting to connect to a public land mobile network using a random access channel of a base station entity, wherein the public land mobile network comprises:
a first radio cell with a first base station entity; and a second radio cell with a second base station entity; wherein the public land mobile network is configured to facilitate transmission of a penalty information with respect to at least the first radio cell to the user equipment, wherein the penalty information causes the user equipment to apply a modified cell selection and/or cell reselection procedure in case of: failure of the user equipment attempting to connect to the public land mobile network using the random access channel of the first base station entity based on a previous cell selection and/or cell reselection procedure performed by the user equipment where both the first radio cell and the second radio cell fulfill a cell selection criterion, with the first radio cell being prioritized relative to the second radio cell; wherein the modified cell selection and/or cell reselection procedure comprises the user equipment attempting to connect to the public land mobile network using a random access channel of the second base station entity under unchanged radio conditions with respect to the first base station entity, the second base station entity, and the user equipment. 9. A user equipment for attempting to connect to a public land mobile network comprising a first radio cell with a first base station entity and a second radio cell with a second base station entity using a random access channel of the first base station entity and/or of the second base station entity, wherein the user equipment comprises: a processor and a memory, wherein the processor, based on execution of instructions stored in the memory, is configured for:
performing a cell selection and/or cell reselection procedure wherein both the first radio cell and the second radio cell fulfill a cell selection criterion, with the first radio cell being prioritized relative to the second radio cell; attempting to connect to the public land mobile network using the random access channel of the first base station entity, and failing to connect to the public land mobile network using the random access channel of the first base station entity; and performing a modified cell selection and/or cell reselection procedure, under unchanged radio conditions regarding the first base station entity, the second base station entity, and the user equipment, whereby the user equipment attempts to connect to the public land mobile network using the random access channel of the second base station entity. 10. The user equipment according to claim 9, wherein the processor is further configured to receive penalty information from the first base station entity. 11. The user equipment according to claim 9, wherein the memory is configured to store penalty information. 12. A non-transitory processor-readable medium having processor-executable instructions stored thereon for cell selection and/or cell reselection handling by a user equipment attempting to connect to a public land mobile network using a random access channel of a base station entity, wherein the public land mobile network comprises a first radio cell with a first base station entity and a second radio cell with a second base station entity, the processor-executable instructions, when executed by a processor, facilitating performance of the following:
performing, by the user equipment, a cell selection and/or cell reselection procedure wherein both the first radio cell and the second radio cell fulfill a cell selection criterion with the first radio cell being prioritized relative to the second radio cell; attempting, by the user equipment, to connect to the public land mobile network using a random access channel of the first base station entity, and failing to connect to the public land mobile network using the random access channel of the first base station entity; and performing, by the user equipment, a modified cell selection and/or cell reselection procedure, under unchanged radio conditions regarding the first base station entity, the second base station entity, and the user equipment, whereby the user equipment attempts to connect to the public land mobile network using a random access channel of the second base station entity. 13. (canceled) 14. The method according to claim 6, wherein the penalty information is transmitted from the first base station entity to the user equipment on a broadcast channel. 15. The method according to claim 7, wherein the penalty information is stored in a firmware-associated memory of the user equipment. 
| A method for cell selection and/or cell reselection handling includes: performing, by a user equipment, a cell selection and/or cell reselection procedure wherein both a first radio cell and a second radio cell fulfill a cell selection criterion with the first radio cell being prioritized relative to the second radio cell; attempting, by the user equipment, to connect to a public land mobile network using a random access channel of a first base station entity, and failing to connect to the public land mobile network using the random access channel of the first base station entity; and performing, by the user equipment, a modified cell selection and/or cell reselection procedure, under unchanged radio conditions, whereby the user equipment attempts to connect to the public land mobile network using a random access channel of the second base station entity. 1. A method for cell selection and/or cell reselection handling by a user equipment attempting to connect to a public land mobile network using a random access channel of a base station entity, wherein the public land mobile network comprises a first radio cell with a first base station entity and a second radio cell with a second base station entity, comprising:
performing, by the user equipment, a cell selection and/or cell reselection procedure wherein both the first radio cell and the second radio cell fulfill a cell selection criterion, with the first radio cell being prioritized relative to the second radio cell; attempting, by the user equipment, to connect to the public land mobile network using a random access channel of the first base station entity, and failing to connect to the public land mobile network using the random access channel of the first base station entity; and performing, by the user equipment, a modified cell selection and/or cell reselection procedure, under unchanged radio conditions regarding the first base station entity, the second base station entity, and the user equipment, whereby the user equipment attempts to connect to the public land mobile network using a random access channel of the second base station entity. 2. The method according to claim 1, wherein performing the modified cell selection and/or cell reselection procedure further comprises:
applying a penalty information with respect to the first radio cell so as to lower a priority of the first radio cell relative to a priority of the second radio cell. 3. The method according to claim 2, wherein the penalty information comprises an offset information, wherein applying the penalty information reduces at least one of the following values with respect to the first radio cell:
the cell selection RX level value (Srxlev), the cell selection quality value (Squal), the measured cell RX level value (RSRP), the measured cell quality value (RSRQ). 4. The method according to claim 2, wherein the penalty information comprises a penalty-related timer information, wherein the penalty information is applied during a penalty time interval indicated by the penalty-related timer information, and wherein the penalty time interval starts after the user equipment fails to connect to the public land mobile network using the random access channel of the first base station entity. 5. The method according to claim 4, wherein after the expiration of the penalty time interval, the cell selection and/or cell reselection procedure is conducted by the user equipment without application of the penalty information. 6. The method according to claim 2, wherein the penalty information is transmitted from the first base station entity to the user equipment. 7. The method according to claim 2, wherein the penalty information is stored in the user equipment. 8. A public land mobile network that facilitates cell selection and/or cell reselection handling for a user equipment attempting to connect to a public land mobile network using a random access channel of a base station entity, wherein the public land mobile network comprises:
a first radio cell with a first base station entity; and a second radio cell with a second base station entity; wherein the public land mobile network is configured to facilitate transmission of a penalty information with respect to at least the first radio cell to the user equipment, wherein the penalty information causes the user equipment to apply a modified cell selection and/or cell reselection procedure in case of: failure of the user equipment attempting to connect to the public land mobile network using the random access channel of the first base station entity based on a previous cell selection and/or cell reselection procedure performed by the user equipment where both the first radio cell and the second radio cell fulfill a cell selection criterion, with the first radio cell being prioritized relative to the second radio cell; wherein the modified cell selection and/or cell reselection procedure comprises the user equipment attempting to connect to the public land mobile network using a random access channel of the second base station entity under unchanged radio conditions with respect to the first base station entity, the second base station entity, and the user equipment. 9. A user equipment for attempting to connect to a public land mobile network comprising a first radio cell with a first base station entity and a second radio cell with a second base station entity using a random access channel of the first base station entity and/or of the second base station entity, wherein the user equipment comprises: a processor and a memory, wherein the processor, based on execution of instructions stored in the memory, is configured for:
performing a cell selection and/or cell reselection procedure wherein both the first radio cell and the second radio cell fulfill a cell selection criterion with the first radio cell being prioritized relative to the second radio cell; attempting to connect to the public land mobile network using the random access channel of the first base station entity, and failing to connect to the public land mobile network using the random access channel of the first base station entity; and performing a modified cell selection and/or cell reselection procedure, under unchanged radio conditions regarding the first base station entity, the second base station entity, and the user equipment, whereby the user equipment attempts to connect to the public land mobile network using the random access channel of the second base station entity. 10. The user equipment according to claim 9, wherein the processor is further configured to receive penalty information from the first base station entity. 11. The user equipment according to claim 9, wherein the memory is configured to store penalty information. 12. A non-transitory processor-readable medium having processor-executable instructions stored thereon for cell selection and/or cell reselection handling by a user equipment attempting to connect to a public land mobile network using a random access channel of a base station entity, wherein the public land mobile network comprises a first radio cell with a first base station entity and a second radio cell with a second base station entity, the processor-executable instructions, when executed by a processor, facilitating performance of the following:
performing, by the user equipment, a cell selection and/or cell reselection procedure wherein both the first radio cell and the second radio cell fulfill a cell selection criterion with the first radio cell being prioritized relative to the second radio cell; attempting, by the user equipment, to connect to the public land mobile network using a random access channel of the first base station entity, and failing to connect to the public land mobile network using the random access channel of the first base station entity; and performing, by the user equipment, a modified cell selection and/or cell reselection procedure, under unchanged radio conditions regarding the first base station entity, the second base station entity, and the user equipment, whereby the user equipment attempts to connect to the public land mobile network using a random access channel of the second base station entity. 13. (canceled) 14. The method according to claim 6, wherein the penalty information is transmitted from the first base station entity to the user equipment on a broadcast channel. 15. The method according to claim 7, wherein the penalty information is stored in a firmware-associated memory of the user equipment. | 2,600 |
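The penalty mechanism described in this record's claims (a failed random-access attempt on the prioritized cell starts a penalty timer, during which reselection prefers the next cell, and normal ranking resumes after expiry) can be sketched in Python. This is a minimal illustrative sketch, not anything specified in the patent: the function names, the numeric priority encoding, and the `penalty_interval` parameter are all invented for illustration.

```python
import time

def select_cell(cells, penalties, now=None):
    """Pick the highest-priority cell whose penalty timer has expired.

    cells: list of (cell_id, priority) tuples, higher priority preferred.
    penalties: dict mapping cell_id -> penalty expiry timestamp (seconds).
    """
    now = time.monotonic() if now is None else now
    # Exclude cells still inside their penalty time interval.
    eligible = [c for c in cells if penalties.get(c[0], 0.0) <= now]
    if not eligible:
        # Every candidate is penalized: fall back to the unmodified ranking.
        eligible = cells
    return max(eligible, key=lambda c: c[1])[0]

def on_random_access_failure(cell_id, penalties, penalty_interval, now=None):
    """Start the penalty timer for a cell after a failed RACH attempt."""
    now = time.monotonic() if now is None else now
    penalties[cell_id] = now + penalty_interval
```

Under this sketch, a failure on the prioritized cell makes the second cell win reselection under otherwise unchanged radio conditions, and the first cell is considered again once its timer expires, matching claims 4 and 5.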
9,798 | 9,798 | 12,691,992 | 2,683 | Embodiments of the present invention provide a single platform that provides controller functionality for each of security, monitoring and automation, as well as providing a capacity to function as a bidirectional Internet gateway. Embodiments of the present invention provide such functionality by virtue of a configurable architecture that enables a user to adapt the system for the user's specific needs. Embodiments of the present invention further provide for a software-based installation workflow to activate and provision the controller and associated sensors and network. | 1. A computer-implemented method for configuring a security, monitoring and automation (SMA) system, said method comprising:
configuring a network in a domain comprising an SMA controller, wherein said configuring is performed using the SMA controller; configuring one or more of security sensors, monitoring devices, and home area network devices having an automation interface to communicate with the SMA controller, wherein said configuring is performed using the SMA controller; and testing a communication path between the SMA controller and a remote server. 2. The method of claim 1 further comprising:
executing a script by the SMA controller, wherein
the script guides a user of the SMA controller through said configuring the network, said configuring the one or more of security sensors, monitoring devices and home area network devices, and said testing the alarm communication path. 3. The method of claim 2 wherein said executing the script displays a series of user interfaces on a display coupled to the SMA controller. 4. The method of claim 1 further comprising:
configuring one or more zones, wherein each zone is associated with a security sensor of the one or more security sensors; and testing an alarm communication path. 5. The method of claim 4 wherein the alarm communication path comprises:
a link between a security sensor of the one or more security sensors and the SMA controller; and a network link between the SMA controller and the remote server. 6. The method of claim 5 wherein the alarm communication path further comprises a link between the remote server and an alarm central station. 7. The method of claim 5 wherein said testing the alarm communication path further comprises:
receiving a sensor fault event signal from the security sensor by the SMA controller; and transmitting information related to the sensor fault event signal to the remote server in response to said receiving the sensor fault event signal when the SMA controller is in an armed state, wherein
said transmitting is performed by the SMA controller, and
said transmitting is performed using the network link. 8. The method of claim 7 wherein the information related to the sensor fault event signal comprises an identifier of the security sensor. 9. The method of claim 4 wherein said configuring the one or more security sensors to communicate with the SMA controller comprises:
searching for each of the one or more security sensors by the SMA controller; displaying, on a display coupled to the SMA controller, information corresponding to each found security sensor of the one or more security sensors; and pairing each found sensor. 10. The method of claim 4 wherein the one or more security sensors communicate wirelessly with the SMA controller. 11. The method of claim 4 wherein said configuring the one or more zones comprises:
selecting a zone of the one or more zones; and editing information associated with the selected zone. 12. The method of claim 1 wherein said configuring the network in the domain comprising the SMA controller comprises:
locating a network router in the domain; securing the network router; and creating a secure network, wherein
the secure network comprises the SMA controller and the network router, and
said securing the network router and creating the secure network are performed in response to commands transmitted from the SMA controller to the network router. 13. The method of claim 12 wherein said configuring the network further comprises:
performing a connectivity test between the SMA controller and the remote server, wherein
said performing the connectivity test is performed using the secure network. 14. The method of claim 12 wherein the secure network is a WiFi network. 15. The method of claim 12 wherein said configuring the network further comprises:
locating a cellular network for which the SMA controller is provisioned; and performing a connectivity test between the SMA controller and the remote server, wherein
said performing the connectivity test is performed using the cellular network. 16. The method of claim 1 wherein said configuring the network in the domain comprising the SMA controller comprises:
locating a power-line network adapter; and securing the power-line network adapter, wherein
said securing the power-line network adapter is performed in response to commands transmitted from the SMA controller to the power-line network adapter. 17. A device comprising:
one or more communication interfaces, each for communication with one or more of security sensors, monitoring devices, and home area network devices having an automation interface, respectively; a network communication interface for communication with a network router in a domain comprising the device and the network router; a processor, coupled to the one or more communication interfaces, network communication interface and a memory, and configured to execute instructions stored in the memory; and the memory storing instructions configured to provide a workflow for activating the device, wherein the workflow comprises
configuring a network in the domain using the network communication interface, wherein
said configuring is performed by the device in communication with the network router,
configuring one or more of the security sensors, the monitoring devices, and the home area network devices having an automation interface to communicate with the device using the respective communication interfaces; and
testing a communication path between the device and a remote server using the network communication interface. 18. The device of claim 17 wherein the instructions configured to provide the workflow comprise instructions for executing a script, wherein the script guides a user of the device through said configuring the network, said configuring the one or more of the security sensors, monitoring devices and home area network devices, and said testing the alarm communication path. 19. The device of claim 18 further comprising:
a display, coupled to the processor, and configured to display a series of user interfaces in response to said executing the script. 20. The device of claim 17 wherein the instructions configured to provide the workflow comprise instructions for:
configuring one or more zones, wherein
each zone is associated with a security sensor of the one or more security sensors, and
testing an alarm communication path using the network communication interface. 21. The device of claim 20 wherein the alarm communication path comprises:
a link between a security sensor of the one or more security sensors and the device; a network link between the device and the network router; a network link between the network router and a remote server; and a link between the remote server and an alarm central station. 22. The device of claim 20 further comprising:
a display coupled to the processor; and wherein the workflow for said configuring the one or more security sensors to communicate with the device further comprises
searching for each of the one or more security sensors,
displaying, on the display, information corresponding to each found security sensor of the one or more security sensors, and
pairing each found sensor. 23. The device of claim 20 wherein the workflow for said configuring the one or more zones further comprises:
selecting a zone of the one or more zones; and editing information associated with the selected zone. 24. The device of claim 17 wherein said workflow for configuring the network in the domain further comprises:
locating the network router; securing the network router; and creating a secure network, wherein
the secure network comprises the device and the network router, and
said securing the network router and creating the secure network are performed in response to commands transmitted from the device to the network router. 25. The device of claim 24 further comprising:
a cellular communication interface for communication with a cellular network for which the device is provisioned; and wherein said workflow for configuring the network further comprises locating the cellular network, and
performing a connectivity test between the device and a remote server, wherein
said performing the connectivity test is performed using the cellular network. 26. The device of claim 17 wherein said workflow for configuring the network in the domain further comprises:
locating a power-line network adapter; and securing the power-line network adapter, wherein
said securing the power-line network adapter is performed in response to commands transmitted from the device to the power-line network adapter. | Embodiments of the present invention provide a single platform that provides controller functionality for each of security, monitoring and automation, as well as providing a capacity to function as a bidirectional Internet gateway. Embodiments of the present invention provide such functionality by virtue of a configurable architecture that enables a user to adapt the system for the user's specific needs. Embodiments of the present invention further provide for a software-based installation workflow to activate and provision the controller and associated sensors and network.1. A computer-implemented method for configuring a security, monitoring and automation (SMA) system, said method comprising:
configuring a network in a domain comprising an SMA controller, wherein said configuring is performed using the SMA controller; configuring one or more of security sensors, monitoring devices, and home area network devices having an automation interface to communicate with the SMA controller, wherein said configuring is performed using the SMA controller; and testing a communication path between the SMA controller and a remote server. 2. The method of claim 1 further comprising:
executing a script by the SMA controller, wherein
the script guides a user of the SMA controller through said configuring the network, said configuring the one or more of security sensors, monitoring devices and home area network devices, and said testing the alarm communication path. 3. The method of claim 2 wherein said executing the script displays a series of user interfaces on a display coupled to the SMA controller. 4. The method of claim 1 further comprising:
configuring one or more zones, wherein each zone is associated with a security sensor of the one or more security sensors; and testing an alarm communication path. 5. The method of claim 4 wherein the alarm communication path comprises:
a link between a security sensor of the one or more security sensors and the SMA controller; and a network link between the SMA controller and the remote server. 6. The method of claim 5 wherein the alarm communication path further comprises a link between the remote server and an alarm central station. 7. The method of claim 5 wherein said testing the alarm communication path further comprises:
receiving a sensor fault event signal from the security sensor by the SMA controller; and transmitting information related to the sensor fault event signal to the remote server in response to said receiving the sensor fault event signal when the SMA controller is in an armed state, wherein
said transmitting is performed by the SMA controller, and
said transmitting is performed using the network link. 8. The method of claim 7 wherein the information related to the sensor fault event signal comprises an identifier of the security sensor. 9. The method of claim 4 wherein said configuring the one or more security sensors to communicate with the SMA controller comprises:
searching for each of the one or more security sensors by the SMA controller; displaying, on a display coupled to the SMA controller, information corresponding to each found security sensor of the one or more security sensors; and pairing each found sensor. 10. The method of claim 4 wherein the one or more security sensors communicate wirelessly with the SMA controller. 11. The method of claim 4 wherein said configuring the one or more zones comprises:
selecting a zone of the one or more zones; and editing information associated with the selected zone. 12. The method of claim 1 wherein said configuring the network in the domain comprising the SMA controller comprises:
locating a network router in the domain; securing the network router; and creating a secure network, wherein
the secure network comprises the SMA controller and the network router, and
said securing the network router and creating the secure network are performed in response to commands transmitted from the SMA controller to the network router. 13. The method of claim 12 wherein said configuring the network further comprises:
performing a connectivity test between the SMA controller and the remote server, wherein
said performing the connectivity test is performed using the secure network. 14. The method of claim 12 wherein the secure network is a WiFi network. 15. The method of claim 12 wherein said configuring the network further comprises:
locating a cellular network for which the SMA controller is provisioned; and performing a connectivity test between the SMA controller and the remote server, wherein
said performing the connectivity test is performed using the cellular network. 16. The method of claim 1 wherein said configuring the network in the domain comprising the SMA controller comprises:
locating a power-line network adapter; and securing the power-line network adapter, wherein
said securing the power-line network adapter is performed in response to commands transmitted from the SMA controller to the power-line network adapter. 17. A device comprising:
one or more communication interfaces, each for communication with one or more of security sensors, monitoring devices, and home area network devices having an automation interface, respectively; a network communication interface for communication with a network router in a domain comprising the device and the network router; a processor, coupled to the one or more communication interfaces, network communication interface and a memory, and configured to execute instructions stored in the memory; and the memory storing instructions configured to provide a workflow for activating the device, wherein the workflow comprises
configuring a network in the domain using the network communication interface, wherein
said configuring is performed by the device in communication with the network router,
configuring one or more of the security sensors, the monitoring devices, and the home area network devices having an automation interface to communicate with the device using the respective communication interfaces; and
testing a communication path between the device and a remote server using the network communication interface. 18. The device of claim 17 wherein the instructions configured to provide the workflow comprise instructions for executing a script, wherein the script guides a user of the device through said configuring the network, said configuring the one or more of the security sensors, monitoring devices and home area network devices, and said testing the alarm communication path. 19. The device of claim 18 further comprising:
a display, coupled to the processor, and configured to display a series of user interfaces in response to said executing the script. 20. The device of claim 17 wherein the instructions configured to provide the workflow comprise instructions for:
configuring one or more zones, wherein
each zone is associated with a security sensor of the one or more security sensors, and
testing an alarm communication path using the network communication interface. 21. The device of claim 20 wherein the alarm communication path comprises:
a link between a security sensor of the one or more security sensors and the device; a network link between the device and the network router; a network link between the network router and a remote server; and a link between the remote server and an alarm central station. 22. The device of claim 20 further comprising:
a display coupled to the processor; and wherein the workflow for said configuring the one or more security sensors to communicate with the device further comprises
searching for each of the one or more security sensors,
displaying, on the display, information corresponding to each found security sensor of the one or more security sensors, and
pairing each found sensor. 23. The device of claim 20 wherein the workflow for said configuring the one or more zones further comprises:
selecting a zone of the one or more zones; and editing information associated with the selected zone. 24. The device of claim 17 wherein said workflow for configuring the network in the domain further comprises:
locating the network router; securing the network router; and creating a secure network, wherein
the secure network comprises the device and the network router, and
said securing the network router and creating the secure network are performed in response to commands transmitted from the device to the network router. 25. The device of claim 24 further comprising:
a cellular communication interface for communication with a cellular network for which the device is provisioned; and wherein said workflow for configuring the network further comprises locating the cellular network, and
performing a connectivity test between the device and a remote server, wherein
said performing the connectivity test is performed using the cellular network. 26. The device of claim 17 wherein said workflow for configuring the network in the domain further comprises:
locating a power-line network adapter; and securing the power-line network adapter, wherein
said securing the power-line network adapter is performed in response to commands transmitted from the device to the power-line network adapter. | 2,600 |
9,799 | 9,799 | 15,785,104 | 2,646 | A method, apparatus and system for extended wireless communication include an airborne platform including at least one antenna to pick up and radiate wireless signals, a platform controller to control the altitude and attitude of the airborne platform, and a communication payload. In an embodiment, the communication payload includes at least two transponders to establish wireless links and a controller having a processor and a memory coupled to the processor. In some embodiments, the memory has stored therein instructions executable by the processor to cause the airborne communication system to elevate the airborne platform to an altitude at which wireless connectivity is able to be established with a first wireless network, establish a first wireless link to the first wireless network, establish a second wireless link, and relay data between the first wireless link and the second wireless link. | 1. A method for providing extended wireless communications, comprising:
providing an airborne platform having a communication payload able to establish at least a first and a second wireless link, and a flight control system able to alter at least one of a position or an attitude of the airborne platform; elevating the airborne platform using the flight control system to an altitude at which wireless connectivity is able to be established with a first wireless network; establishing a first wireless link to the first wireless network using the communication payload of the airborne platform; establishing a second wireless link using the communication payload of the airborne platform; relaying data between the first wireless link and the second wireless link using the communication payload; and altering at least one of the position or the attitude of the airborne platform using the flight control system to locate a position or an attitude for the airborne platform having at least one of an optimum signal strength or coverage area for at least one of the first and the second wireless links. 2. The method of claim 1, wherein establishing a second wireless link comprises establishing a wireless link to a second wireless network and providing an access point to the second wireless network. 3. The method of claim 1, wherein establishing a second wireless link comprises connecting to external communication equipment. 4. The method of claim 1, wherein at least one of the first and second wireless links are established using at least a first directional radio antenna. 5. The method of claim 4, wherein the first directional antenna is positioned on a gimbal mount and operable to change its orientation. 6. The method of claim 4, wherein the first directional antenna is integrated into an airframe of the airborne platform. 7. 
The method of claim 4, wherein the first and second wireless links are established using the first directional radio antenna and a second directional antenna and wherein the second directional antenna is oriented in a different direction from that of the first directional antenna. 8. The method of claim 1, wherein the communication payload comprises a microcontroller, a first wireless transponder and a second wireless transponder and wherein the first wireless transponder and the second wireless transponder are connected and controlled by the microcontroller. 9. (canceled) 10. The method of claim 1, wherein the flight control system comprises navigation, guidance and control modules. 11. The method of claim 1, wherein providing the first and second wireless links for the airborne platform comprises providing a first wireless transponder and a second wireless transponder and wherein the first wireless transponder and the second wireless transponder are connected and operated by the flight control system. 12. The method of claim 1, wherein the airborne platform is an autonomous airborne platform. 13. The method of claim 12, further comprising optimizing at least one of the first and second wireless links by using the flight control system to automatically seek an optimum platform position and an attitude. 14. The method of claim 13, wherein the optimizing at least one of the first and second wireless links comprises at least one of increasing received and transmitted signal strength in at least one of the first and second wireless links, reducing interference with external wireless links, switching at least one of the first and second wireless links to a stronger wireless link, increasing area coverage for at least one of the first and second wireless links, and creating a flight plan around the optimum position. 15. 
The method of claim 13, wherein the seeking an optimum platform position and an attitude comprises at least one of rising above an obstacle, moving the platform horizontally, and changing the platform's orientation. 16. The method of claim 1, wherein one of the first and second wireless links is one of radio, optical and acoustic link. 17. The method of claim 16, wherein the radio link is one of WiFi, GSM and LTE link. 18. The method of claim 16, wherein the radio link is provided by a software defined radio. 19. The method of claim 16, wherein the radio link is one of analog radio and digital radio. 20. The method of claim 16, wherein the optical link is provided by an optical transponder. 21. The method of claim 16, wherein the acoustic link is provided by one of a microphone and an acoustic speaker. 22. The method of claim 1, wherein one of the first and second wireless link is configured for broadcasting. 23. The method of claim 1, wherein the airborne platform is one of a fixed-wing plane, a rotorcraft, a vertical take-off and landing aircraft, a lighter-than-air aircraft and a kite. 24. The method of claim 1, further providing a third wireless link. 25. The method of claim 1, further providing a wired link. 26. An apparatus for providing extended wireless communications, comprising:
a first controller able to elevate an airborne platform on which a communication payload exists to an altitude at which wireless connectivity is able to be established with a first wireless network; a communication payload comprising:
at least two transponders to establish wireless links;
and a second controller having a processor and a memory coupled to the processor, the memory having stored therein instructions executable by the processor to cause the apparatus to:
establish a first wireless link to the first wireless network;
establish a second wireless link;
relay data between the first wireless link and the second wireless link using the communication payload; and
alter at least one of a position or an attitude of the airborne platform using the first controller to locate a position or an attitude for the airborne platform having at least one of an optimum signal strength or coverage area for at least one of the first and the second wireless links. 27. The apparatus of claim 26, wherein the communication payload communicates a signal to the first controller of the airborne platform to elevate the airborne platform. 28. An airborne communication system for providing extended wireless communications, comprising:
an airborne platform including:
at least one antenna to pick up and radiate wireless signals;
a platform controller to control an altitude and attitude of the airborne platform; and
a communication payload, wherein the communication payload comprises:
at least two transponders to establish wireless links; and
a controller having a processor and a memory coupled to the processor, the memory having stored therein instructions executable by the processor to cause the airborne communication system to:
elevate the airborne platform to an altitude at which wireless connectivity is able to be established with a first wireless network;
establish a first wireless link to the first wireless network;
establish a second wireless link;
relay data between the first wireless link and the second wireless link; and
alter at least one of the attitude or the altitude of the airborne platform using the platform controller to locate an attitude or an altitude for the airborne platform having at least one of an optimum signal strength or coverage area for at least one of the first and the second wireless links. | A method, apparatus and system for extended wireless communication include an airborne platform including at least one antenna to pick up and radiate wireless signals, a platform controller to control the altitude and attitude of the airborne platform, and a communication payload. In an embodiment, the communication payload includes at least two transponders to establish wireless links and a controller having a processor and a memory coupled to the processor. In some embodiments, the memory has stored therein instructions executable by the processor to cause the airborne communication system to elevate the airborne platform to an altitude at which wireless connectivity is able to be established with a first wireless network, establish a first wireless link to the first wireless network, establish a second wireless link, and relay data between the first wireless link and the second wireless link.1. A method for providing extended wireless communications, comprising:
providing an airborne platform having a communication payload able to establish at least a first and a second wireless link, and a flight control system able to alter at least one of a position or an attitude of the airborne platform; elevating the airborne platform using the flight control system to an altitude at which wireless connectivity is able to be established with a first wireless network; establishing a first wireless link to the first wireless network using the communication payload of the airborne platform; establishing a second wireless link using the communication payload of the airborne platform; relaying data between the first wireless link and the second wireless link using the communication payload; and altering at least one of the position or the attitude of the airborne platform using the flight control system to locate a position or an attitude for the airborne platform having at least one of an optimum signal strength or coverage area for at least one of the first and the second wireless links. 2. The method of claim 1, wherein establishing a second wireless link comprises establishing a wireless link to a second wireless network and providing an access point to the second wireless network. 3. The method of claim 1, wherein establishing a second wireless link comprises connecting to external communication equipment. 4. The method of claim 1, wherein at least one of the first and second wireless links are established using at least a first directional radio antenna. 5. The method of claim 4, wherein the first directional antenna is positioned on a gimbal mount and operable to change its orientation. 6. The method of claim 4, wherein the first directional antenna is integrated into an airframe of the airborne platform. 7. 
The method of claim 4, wherein the first and second wireless links are established using the first directional radio antenna and a second directional antenna and wherein the second directional antenna is oriented in a different direction from that of the first directional antenna. 8. The method of claim 1, wherein the communication payload comprises a microcontroller, a first wireless transponder and a second wireless transponder and wherein the first wireless transponder and the second wireless transponder are connected and controlled by the microcontroller. 9. (canceled) 10. The method of claim 1, wherein the flight control system comprises navigation, guidance and control modules. 11. The method of claim 1, wherein providing the first and second wireless links for the airborne platform comprises providing a first wireless transponder and a second wireless transponder and wherein the first wireless transponder and the second wireless transponder are connected and operated by the flight control system. 12. The method of claim 1, wherein the airborne platform is an autonomous airborne platform. 13. The method of claim 12, further comprising optimizing at least one of the first and second wireless links by using the flight control system to automatically seek an optimum platform position and an attitude. 14. The method of claim 13, wherein the optimizing at least one of the first and second wireless links comprises at least one of increasing received and transmitted signal strength in at least one of the first and second wireless links, reducing interference with external wireless links, switching at least one of the first and second wireless links to a stronger wireless link, increasing area coverage for at least one of the first and second wireless links, and creating a flight plan around the optimum position. 15. 
The method of claim 13, wherein the seeking an optimum platform position and an attitude comprises at least one of rising above an obstacle, moving the platform horizontally, and changing the platform's orientation. 16. The method of claim 1, wherein one of the first and second wireless links is one of radio, optical and acoustic link. 17. The method of claim 16, wherein the radio link is one of WiFi, GSM and LTE link. 18. The method of claim 16, wherein the radio link is provided by a software defined radio. 19. The method of claim 16, wherein the radio link is one of analog radio and digital radio. 20. The method of claim 16, wherein the optical link is provided by an optical transponder. 21. The method of claim 16, wherein the acoustic link is provided by one of a microphone and an acoustic speaker. 22. The method of claim 1, wherein one of the first and second wireless link is configured for broadcasting. 23. The method of claim 1, wherein the airborne platform is one of a fixed-wing plane, a rotorcraft, a vertical take-off and landing aircraft, a lighter-than-air aircraft and a kite. 24. The method of claim 1, further providing a third wireless link. 25. The method of claim 1, further providing a wired link. 26. An apparatus for providing extended wireless communications, comprising:
a first controller able to elevate an airborne platform on which a communication payload exists to an altitude at which wireless connectivity is able to be established with a first wireless network; a communication payload comprising:
at least two transponders to establish wireless links;
and a second controller having a processor and a memory coupled to the processor, the memory having stored therein instructions executable by the processor to cause the apparatus to:
establish a first wireless link to the first wireless network;
establish a second wireless link;
relay data between the first wireless link and the second wireless link using the communication payload; and
alter at least one of a position or an attitude of the airborne platform using the first controller to locate a position or an attitude for the airborne platform having at least one of an optimum signal strength or coverage area for at least one of the first and the second wireless links. 27. The apparatus of claim 26, wherein the communication payload communicates a signal to the first controller of the airborne platform to elevate the airborne platform. 28. An airborne communication system for providing extended wireless communications, comprising:
an airborne platform including:
at least one antenna to pick up and radiate wireless signals;
a platform controller to control an altitude and attitude of the airborne platform; and
a communication payload, wherein the communication payload comprises:
at least two transponders to establish wireless links; and
a controller having a processor and a memory coupled to the processor, the memory having stored therein instructions executable by the processor to cause the airborne communication system to:
elevate the airborne platform to an altitude at which wireless connectivity is able to be established with a first wireless network;
establish a first wireless link to the first wireless network;
establish a second wireless link;
relay data between the first wireless link and the second wireless link; and
alter at least one of the attitude or the altitude of the airborne platform using the platform controller to locate an attitude or an altitude for the airborne platform having at least one of an optimum signal strength or coverage area for at least one of the first and the second wireless links. | 2,600 |