Schema (column, dtype, min, max):

Unnamed: 0         int64          0       350k
level_0            int64          0       351k
ApplicationNumber  int64          9.75M   96.1M
ArtUnit            int64          1.6k    3.99k
Abstract           stringlengths  1       8.37k
Claims             stringlengths  3       292k
abstract-claims    stringlengths  68      293k
TechCenter         int64          1.6k    3.9k
Row 10,200 · ApplicationNumber 15,527,543 · ArtUnit 2,685
Abstract: The present technology relates to a communication device, a communication method, and a program that enable improvement in security of electric field communication. Biological information about a user is detected in accordance with an action of the user, and electric field communication being performed by an electric field communication unit is controlled in accordance with the biological information. The present technology can be applied to communication devices that perform electric field communication using an electric field, such as intra-body communication using the human body as a communication medium.
Claims: 1. A communication device comprising: an electric field communication unit configured to perform electric field communication using an electric field; a sensor configured to detect biological information about a user, in accordance with an action of the user; and a control unit configured to control the electric field communication being performed by the electric field communication unit, in accordance with the biological information. 2. The communication device according to claim 1, wherein the control unit conducts authentication of the user using the biological information, and, in a case where the authentication of the user is successful, the control unit causes the electric field communication unit to start the electric field communication. 3. The communication device according to claim 2, wherein the sensor detects an electromyographic waveform. 4. The communication device according to claim 3, wherein the sensor detects an electrocardiographic waveform. 5. The communication device according to claim 1, wherein: the sensor further detects movement of the user; and the control unit controls the electric field communication being performed by the electric field communication unit, in accordance with the biological information and the movement of the user. 6. The communication device according to claim 1, wherein, in accordance with a signal from a device on the other end of communication, the control unit prompts the user to act to cause the sensor to detect the biological information about the user. 7. The communication device according to claim 1, wherein the electric field communication unit performs intra-body communication as the electric field communication, with a human body being a communication medium. 8. The communication device according to claim 1, which is a wearable device. 9. A communication method implemented by a communication device, the communication device including: an electric field communication unit configured to perform electric field communication using an electric field; and a sensor configured to detect biological information about a user, in accordance with an action of the user, the communication method comprising the step of controlling the electric field communication being performed by the electric field communication unit, in accordance with the biological information. 10. A program to be executed by a computer that controls a communication device, the communication device including: an electric field communication unit configured to perform electric field communication using an electric field; and a sensor configured to detect biological information about a user, in accordance with an action of the user, the program causing the computer to carry out the step of controlling the electric field communication being performed by the electric field communication unit, in accordance with the biological information.
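The gating behavior in claims 1 and 2 (start electric field communication only after the biological information authenticates the user) can be sketched as follows. This is a toy illustration: the class and method names (`Sensor`, `EFCommUnit`, `Controller`) are hypothetical, not from the patent, and the string-equality matcher stands in for real waveform comparison.

```python
class Sensor:
    """Stand-in for the biometric sensor (e.g. ECG/EMG waveform capture)."""
    def __init__(self, waveform):
        self._waveform = waveform

    def detect(self):
        # Triggered by a user action; returns the captured waveform.
        return self._waveform


class EFCommUnit:
    """Stand-in for the electric field (intra-body) communication unit."""
    def __init__(self):
        self.active = False

    def start(self):
        self.active = True


class Controller:
    """Authenticates the user from biometric data, then enables communication."""
    def __init__(self, sensor, comm_unit, enrolled_template):
        self.sensor = sensor
        self.comm_unit = comm_unit
        self.enrolled = enrolled_template

    def on_user_action(self):
        waveform = self.sensor.detect()
        if self._authenticate(waveform):
            # Claim 2: communication starts only on successful authentication.
            self.comm_unit.start()
        return self.comm_unit.active

    def _authenticate(self, waveform):
        # Placeholder matcher; a real system compares waveform features.
        return waveform == self.enrolled


ctrl = Controller(Sensor("ecg-template-A"), EFCommUnit(), "ecg-template-A")
print(ctrl.on_user_action())  # matching template: communication starts
```

An unmatched waveform leaves the communication unit inactive, which is the security property the abstract describes.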
TechCenter 2,600
Row 10,201 · ApplicationNumber 15,201,352 · ArtUnit 2,627
Abstract: Disclosed herein is a lens for a wearable projection system. The lens includes a holographic optical element disposed between layers of the lens. Joints between the holographic optical element and the lens layers on an edge of the lens are covered with a sealant to protect the holographic optical element.
Claims: 1. A method to manufacture a wearable display lens, comprising: providing a lens having a holographic optical element (HOE) disposed between a first layer and a second layer of the lens, the HOE exposed along an edge of the lens; shaping the lens; and applying a sealant to the edge of the lens to cover the HOE. 2. The method of claim 1, shaping the lens comprising at least one of cutting the lens, grinding the lens, or polishing the lens. 3. The method of claim 1, shaping the lens comprising at least one of cutting the edge of the lens, grinding the edge of the lens, or polishing the edge of the lens. 4. The method of claim 1, applying the sealant to the edge of the lens comprising rolling the sealant onto the edge. 5. The method of claim 1, applying the sealant to the edge of the lens comprising dipping the edge of the lens in the sealant. 6. The method of claim 1, providing the lens comprising: providing the first layer and the second layer; applying the HOE to a back surface of the first layer; and applying the second layer to the HOE to place the HOE between the first and the second layer. 7. The method of claim 6, wherein the HOE is applied to the back surface of the first layer with a pressure sensitive adhesive. 8. The method of claim 1, providing the lens comprising: providing the first layer; applying the HOE to a back surface of the first layer; placing the first layer and the HOE into a mold; and filling the mold with a lens material to form the second layer over the HOE. 9. The method of claim 8, providing the first layer comprising filling the mold with the lens material to form the first layer. 10. The method of claim 8, filling the mold comprising casting the lens material into the mold or injecting the lens material into the mold. 11. The method of claim 8, comprising applying at least one of heat or light to cure the lens material. 12. The method of claim 1, wherein the lens is shaped to have an eyewear lens shape. 13. A lens manufactured according to the method of claim 1. 14. A projection system lens, comprising: a first lens layer; a holographic optical element (HOE) affixed to a back surface of the first lens layer; a second lens layer affixed to the HOE; and a sealant disposed on an edge of the lens, wherein joints between the first layer and the HOE and the second layer and the HOE are on the edge, the sealant to cover the joints. 15. The projection system lens of claim 14, wherein the HOE is affixed to the first lens layer and the second lens layer with a pressure sensitive adhesive. 16. The projection system lens of claim 14, wherein the sealant is a polymer. 17. A system for projecting an image, the system comprising: a frame; a lens coupled to the frame, the lens comprising a holographic optical element (HOE) disposed between a first lens layer and a second lens layer and a sealant disposed on an edge of the lens, wherein joints between the first lens layer and the HOE and the second lens layer and the HOE are on the edge; and a projector coupled to the frame, the projector to project light onto the HOE. 18. The system of claim 17, wherein the first lens layer and the second lens layer are cast or injected in a mold. 19. The system of claim 17, wherein the HOE is affixed to the first lens layer and the second lens layer with a pressure sensitive adhesive. 20. The system of claim 17, wherein the sealant is a polymer. 21. The system of claim 17, wherein the lens is a glasses lens, a goggle lens, or a helmet visor. 22. The system of claim 21, wherein the frame is glasses, goggles, or a helmet. 23. The system of claim 17, comprising a battery electrically coupled to the projector. 24. The system of claim 17, comprising a graphic processor to receive an image information element to include an indication of an image and to send a display control signal to the projector to cause the projector to project one or more pixels corresponding to the image onto the HOE.
TechCenter 2,600
Row 10,202 · ApplicationNumber 15,002,158 · ArtUnit 2,621
Abstract: Disclosed is a multi-view display system that detects and locates viewers, establishes a set of characteristics for each viewer, generates or selects personalized content for each viewer based on those characteristics, and displays the personalized content, via at least one multi-view display, to multiple viewers simultaneously.
Claims: 1. A method for providing differentiated content, via a multi-view display, to a plurality of viewers in a viewing space, wherein the content provided to at least some of the viewers is different than the content provided to other of the viewers, the method comprising: determining a location of each of the viewers in the viewing space; establishing a set of characteristics for each of the viewers; and generating differentiated content for presentation, via the multi-view display, to each of the viewers, wherein the differentiated content generated for an associated viewer is based on the set of characteristics established for the associated viewer. 2. The method of claim 1 further comprising displaying, via the multi-view display, the differentiated content to each associated viewer for viewing at their respective locations, wherein the differentiated content is viewable only at the associated viewer's location. 3. The method of claim 1 wherein determining a location further comprises using at least one camera to observe the viewing space. 4. The method of claim 1 wherein determining a location further comprises using beacons. 5. The method of claim 1 wherein determining a location further comprises using a model to predict a future location of a viewer based on the viewer's behavior. 6. The method of claim 1 wherein establishing the set of characteristics further comprises establishing the set of characteristics via inferred identity. 7. The method of claim 6 wherein establishing the set of characteristics via inferred identity further comprises accessing a database containing characteristics for at least some of the viewers. 8. The method of claim 6 wherein establishing the set of characteristics via inferred identity further comprises obtaining a profile identity of the viewer from a social network. 9. The method of claim 1 wherein establishing the set of characteristics further comprises establishing the set of characteristics via observable traits of a viewer.
10. The method of claim 9 wherein observable traits are selected from the group consisting of visible traits of the viewer, behavior of the viewer, location of the viewer, and device usage of the viewer. 11. The method of claim 1 wherein the set of characteristics is updated. 12. The method of claim 1 wherein generating content further comprises selecting the content from pre-generated content, wherein the pre-generated content selected for a viewer is associated with the set of characteristics of the viewer. 13. A differentiated content delivery system for delivering personalized content, simultaneously, to a plurality of viewers, comprising: a viewer detection system that determines a location of each viewer; a viewer characterization system that establishes a set of characteristics for each viewer; a content generation system that generates or selects content for each viewer based on the set of characteristics established for each such viewer; and a content presentation system that is capable of delivering, simultaneously and via a single display, personalized content to each of the viewers. 14. The differentiated content delivery system of claim 13 wherein the viewer detection system comprises a machine vision system. 15. The differentiated content delivery system of claim 14 wherein the machine vision system includes an imaging device and an image processing unit. 16. The differentiated content delivery system of claim 14 wherein the viewer detection system further comprises at least one of a passive trackable object and an active trackable object, wherein both the passive and active trackable objects are worn or carried by at least some of the viewers. 17. The differentiated content delivery system of claim 13 wherein the viewer characterization system comprises a machine vision system. 18.
The differentiated content delivery system of claim 13 wherein the content generation system comprises: a media database, wherein the media database comprises digital representations of a plurality of images; and a processor that is capable of selecting a digital representation of an image that relates to at least some of the characteristics in the set thereof. 19. The differentiated content delivery system of claim 13 wherein the content presentation system comprises a multi-view display that simultaneously delivers the personalized content to a plurality of viewers, wherein the personalized content viewable by each viewer is not viewable by any of the other viewers of the plurality. 20. The differentiated content delivery system of claim 13 wherein: the viewer detection system comprises a plurality of imaging devices distributed throughout a region for locating the viewers as said viewers move throughout the region; and the content presentation system comprises a plurality of multi-view displays distributed throughout the region, each of the multi-view displays having an associated viewing region in which viewers can view content displayed by each such multi-view display, wherein, as viewers enter the viewing region of one of the multi-view displays, the viewer detection system locates the viewer and the one multi-view display displays the personalized content associated with the viewer. 21. The differentiated content delivery system of claim 20 wherein the region is selected from the group consisting of an airline terminal. 22. The differentiated content delivery system of claim 20 wherein the region is selected from the group consisting of transportation facilities, government venues, entertainment venues, educational venues, lodging facilities, religious venues, public facilities, business venues, and medical venues.
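Claim 13's four subsystems (viewer detection, viewer characterization, content generation, content presentation) form a simple pipeline, sketched below in plain Python. Everything here is illustrative: the profile database, the content library, the frame format, and all names are assumptions, and the multi-view rendering step is reduced to a location-to-content mapping.

```python
# Stand-in for the viewer characterization database (claim 7).
PROFILE_DB = {
    "alice": {"language": "en", "interest": "flights"},
    "bob": {"language": "fr", "interest": "dining"},
}

# Pre-generated content keyed by characteristic (claim 12).
CONTENT_LIBRARY = {
    "flights": "Gate B12 now boarding",
    "dining": "Le café est ouvert au niveau 2",
}


def detect_viewers(frame):
    """Stand-in for the machine vision system: yields (viewer_id, location)."""
    return frame  # e.g. [("alice", (3, 1)), ("bob", (7, 2))]


def personalize(frame):
    """One pass of the differentiated content delivery loop (claim 13)."""
    assignments = {}
    for viewer_id, location in detect_viewers(frame):
        traits = PROFILE_DB.get(viewer_id, {})
        content = CONTENT_LIBRARY.get(traits.get("interest"), "Welcome")
        # A multi-view display would render `content` visible only from
        # `location` (claim 19); here we just record the mapping.
        assignments[location] = content
    return assignments


print(personalize([("alice", (3, 1)), ("bob", (7, 2))]))
```

An unrecognized viewer falls back to generic content, which mirrors the inferred-identity claims: characterization degrades gracefully when no profile is available.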
TechCenter 2,600
Row 10,203 · ApplicationNumber 12,199,602 · ArtUnit 2,615
Abstract: Information for use in a command script for a product dispensing system is compiled by sniffing packets associated with a print job that contains information associated with a product to be dispensed, filtering the packets so as to remove fields not associated with the print job, and combining the filtered packets to generate contiguous data associated with the print job. A command script for a product dispensing system is generated by receiving data associated with a print job that contains information associated with a product to be dispensed, extracting the information associated with the product from the print job data, organizing the information associated with the product into discrete informational units, and generating the command script based on the organized discrete information units.
Claims: 1. A method of compiling information for use in a command script for a product dispensing system, comprising: sniffing packets associated with a print job that contains information associated with a product to be dispensed; filtering the packets so as to remove fields not associated with the print job; and combining the filtered packets to generate contiguous data associated with the print job. 2. The method of claim 1, further comprising: extracting the information associated with the product from the contiguous data; and organizing the information associated with the product into discrete informational units. 3. The method of claim 2, wherein the discrete information units comprise discrete textual items. 4. The method of claim 3, wherein each of the discrete textual items comprises an acronym, word, number, and/or phrase. 5. The method of claim 3, wherein organizing the information associated with the product comprises: associating Cartesian coordinates with each of the discrete textual items; associating an orientation indicator for each of the discrete textual items; and/or associating a page number of the print job for each of the discrete textual items that identifies a page that the respective textual item would have printed on. 6. The method of claim 2, wherein the packets comprise Page Description Language (PDL) data. 7. The method of claim 6, wherein extracting the information comprises: extracting Postscript or Printer Command Language (PCL) data. 8. The method of claim 6, wherein extracting the information comprises: extracting raster graphics or binary data; and performing optical character recognition on the raster graphics or binary data. 9. The method of claim 1, wherein the packets are TCP/IP packets. 10. The method of claim 1, wherein sniffing the packets comprises sniffing the packets via a network hub that is configured to present all packet traffic to all ports on the hub. 11.
The method of claim 1, wherein sniffing the packets comprises sniffing the packets via a switch that is configured to mirror traffic destined for at least one printer to a port on the switch. 12. The method of claim 1, wherein the product dispensing system is a pharmaceutical product dispensing system. 13. The method of claim 1, wherein the product dispensing system is a tablet dispensing system. 14. A method of generating a command script for a product dispensing system, comprising: receiving data associated with a print job that contains information associated with a product to be dispensed; extracting the information associated with the product from the print job data; organizing the information associated with the product into discrete informational units; and generating the command script based on the organized discrete information units. 15. The method of claim 14, wherein the discrete information units comprise discrete textual items. 16. The method of claim 15, wherein each of the discrete textual items comprises an acronym, word, number, and/or phrase. 17. The method of claim 15, wherein organizing the information associated with the product comprises: associating Cartesian coordinates with each of the discrete textual items; associating an orientation indicator for each of the discrete textual items; and/or associating a page number of the print job for each of the discrete textual items that identifies a page that the respective textual item would have printed on. 18. The method of claim 14, further comprising: generating a bitmap file containing the extracted information associated with the product. 19. 
The method of claim 18, further comprising: identifying script fields in the bitmap file; determining Cartesian coordinate regions for each of the identified script fields; determining an orientation indicator for each of the identified script fields; and/or determining a page number of the print job for each of the identified script fields that identifies a page that the respective script field would have printed on. 20. The method of claim 19, further comprising: associating respective ones of the script fields with respective ones of the discrete textual items such that each of the script fields is associated with at least one of the discrete textual items; and wherein generating the command script comprises generating the command script based on the associations between the script fields and the discrete textual items. 21. The method of claim 14, wherein extracting the information comprises: extracting Postscript or Printer Command Language (PCL) data. 22. The method of claim 14, wherein extracting the information comprises: extracting raster graphics or binary data; and performing optical character recognition on the raster graphics or binary data. 23. The method of claim 14, wherein the product dispensing system is a pharmaceutical product dispensing system. 24. The method of claim 14, wherein the product dispensing system is a tablet dispensing system. 25. A system for compiling information for use in a command script for a product dispensing system, comprising: a data processing system that is configured to sniff packets associated with a print job that contains information associated with a product to be dispensed, filter the packets so as to remove fields not associated with the print job, and combine the filtered packets to generate contiguous data associated with the print job. 26. 
A system for generating a command script for a product dispensing system, comprising: a data processing system that is configured to receive data associated with a print job that contains information associated with a product to be dispensed, extract the information associated with the product from the print job data, organize the information associated with the product into discrete informational units, and generate the command script based on the organized discrete information units. 27. A computer program product for compiling information for use in a command script for a product dispensing system comprising: a computer readable storage medium having computer readable program code embodied therein, the computer readable program code comprising: computer readable program code configured to sniff packets associated with a print job that contains information associated with a product to be dispensed; computer readable program code configured to filter the packets so as to remove fields not associated with the print job; and computer readable program code configured to combine the filtered packets to generate contiguous data associated with the print job. 28. A computer program product for generating a command script for a product dispensing system comprising: a computer readable storage medium having computer readable program code embodied therein, the computer readable program code comprising: computer readable program code configured to receive data associated with a print job that contains information associated with a product to be dispensed; computer readable program code configured to extract the information associated with the product from the print job data; computer readable program code configured to organize the information associated with the product into discrete informational units; and computer readable program code configured to generate the command script based on the organized discrete information units.
Information for use in a command script for a product dispensing system is compiled by sniffing packets associated with a print job that contains information associated with a product to be dispensed, filtering the packets so as to remove fields not associated with the print job, and combining the filtered packets to generate contiguous data associated with the print job. A command script for a product dispensing system is generated by receiving data associated with a print job that contains information associated with a product to be dispensed, extracting the information associated with the product from the print job data, organizing the information associated with the product into discrete informational units, and generating the command script based on the organized discrete information units.1. A method of compiling information for use in a command script for a product dispensing system, comprising: sniffing packets associated with a print job that contains information associated with a product to be dispensed; filtering the packets so as to remove fields not associated with the print job; and combining the filtered packets to generate contiguous data associated with the print job. 2. The method of claim 1, further comprising: extracting the information associated with the product from the contiguous data; and organizing the information associated with the product into discrete informational units. 3. The method of claim 2, wherein the discrete information units comprise discrete textual items. 4. The method of claim 3, wherein each of the discrete textual items comprises an acronym, word, number, and/or phrase. 5. 
The method of claim 3, wherein organizing the information associated with the product comprises: associating Cartesian coordinates with each of the discrete textual items; associating an orientation indicator for each of the discrete textual items; and/or associating a page number of the print job for each of the discrete textual items that identifies a page that the respective textual item would have printed on. 6. The method of claim 2, wherein the packets comprise Page Description Language (PDL) data. 7. The method of claim 6, wherein extracting the information comprises: extracting Postscript or Printer Command Language (PCL) data. 8. The method of claim 6, wherein extracting the information comprises: extracting raster graphics or binary data; and performing optical character recognition on the raster graphics or binary data. 9. The method of claim 1, wherein the packets are TCP/IP packets. 10. The method of claim 1, wherein sniffing the packets comprises sniffing the packets via a network hub that is configured to present all packet traffic to all ports on the hub. 11. The method of claim 1, wherein sniffing the packets comprises sniffing the packets via a switch that is configured to mirror traffic destined for at least one printer to a port on the switch. 12. The method of claim 1, wherein the product dispensing system is a pharmaceutical product dispensing system. 13. The method of claim 1, wherein the product dispensing system is a tablet dispensing system. 14. A method of generating a command script for a product dispensing system, comprising: receiving data associated with a print job that contains information associated with a product to be dispensed; extracting the information associated with the product from the print job data; organizing the information associated with the product into discrete informational units; and generating the command script based on the organized discrete information units. 15. 
The method of claim 14, wherein the discrete information units comprise discrete textual items. 16. The method of claim 15, wherein each of the discrete textual items comprises an acronym, word, number, and/or phrase. 17. The method of claim 15, wherein organizing the information associated with the product comprises: associating Cartesian coordinates with each of the discrete textual items; associating an orientation indicator for each of the discrete textual items; and/or associating a page number of the print job for each of the discrete textual items that identifies a page that the respective textual item would have printed on. 18. The method of claim 14, further comprising: generating a bitmap file containing the extracted information associated with the product. 19. The method of claim 18, further comprising: identifying script fields in the bitmap file; determining Cartesian coordinate regions for each of the identified script fields; determining an orientation indicator for each of the identified script fields; and/or determining a page number of the print job for each of the identified script fields that identifies a page that the respective script field would have printed on. 20. The method of claim 19, further comprising: associating respective ones of the script fields with respective ones of the discrete textual items such that each of the script fields is associated with at least one of the discrete textual items; and wherein generating the command script comprises generating the command script based on the associations between the script fields and the discrete textual items. 21. The method of claim 14, wherein extracting the information comprises: extracting Postscript or Printer Command Language (PCL) data. 22. The method of claim 14, wherein extracting the information comprises: extracting raster graphics or binary data; and performing optical character recognition on the raster graphics or binary data. 23. 
The method of claim 14, wherein the product dispensing system is a pharmaceutical product dispensing system. 24. The method of claim 14, wherein the product dispensing system is a tablet dispensing system. 25. A system for compiling information for use in a command script for a product dispensing system, comprising: a data processing system that is configured to sniff packets associated with a print job that contains information associated with a product to be dispensed, filter the packets so as to remove fields not associated with the print job, and combine the filtered packets to generate contiguous data associated with the print job. 26. A system for generating a command script for a product dispensing system, comprising: a data processing system that is configured to receive data associated with a print job that contains information associated with a product to be dispensed, extract the information associated with the product from the print job data, organize the information associated with the product into discrete informational units, and generate the command script based on the organized discrete information units. 27. A computer program product for compiling information for use in a command script for a product dispensing system comprising: a computer readable storage medium having computer readable program code embodied therein, the computer readable program code comprising: computer readable program code configured to sniff packets associated with a print job that contains information associated with a product to be dispensed; computer readable program code configured to filter the packets so as to remove fields not associated with the print job; and computer readable program code configured to combine the filtered packets to generate contiguous data associated with the print job. 28. 
A computer program product for generating a command script for a product dispensing system comprising: a computer readable storage medium having computer readable program code embodied therein, the computer readable program code comprising: computer readable program code configured to receive data associated with a print job that contains information associated with a product to be dispensed; computer readable program code configured to extract the information associated with the product from the print job data; computer readable program code configured to organize the information associated with the product into discrete informational units; and computer readable program code configured to generate the command script based on the organized discrete information units.
2,600
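The claims above describe compiling print-job data by sniffing packets, filtering out non-job traffic, and combining payloads into contiguous data. Below is a minimal, hypothetical sketch of that three-step pipeline; the packet layout (`dst_port`, `seq`, `payload` keys) and the use of port 9100 as the printer port are illustrative assumptions, not details from the patent.

```python
# Sketch of claim 1: filter captured packets down to those addressed to a
# printer port, then concatenate payloads in sequence order to rebuild
# contiguous print-job data. Packet representation here is hypothetical.
PRINTER_PORT = 9100  # common raw-printing TCP port; an assumption for illustration

def compile_print_job(packets):
    """packets: iterable of dicts with 'dst_port', 'seq', 'payload' keys."""
    # filtering step: drop packets not associated with the print job
    job = [p for p in packets if p["dst_port"] == PRINTER_PORT and p["payload"]]
    # combining step: restore transmission order, then join payloads
    job.sort(key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in job)

packets = [
    {"dst_port": 9100, "seq": 2, "payload": b"WORLD"},
    {"dst_port": 80,   "seq": 1, "payload": b"HTTP"},   # not part of the job
    {"dst_port": 9100, "seq": 1, "payload": b"HELLO "},
]
print(compile_print_job(packets))  # b'HELLO WORLD'
```

In a real capture (claims 10 and 11), the packets would come from a hub presenting all traffic to all ports, or from a switch mirroring printer-bound traffic to a monitoring port; the filtering and reassembly logic is the same either way.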
10,204
10,204
15,193,861
2,619
A method of editing a digital three-dimensional structure associated with one or more two-dimensional texture in real time is disclosed, wherein the structure and one or more texture are processed and output same in a user interface, and user input is read in the user interface and processed into a cut shape of the three-dimensional structure. A simplified structure is generated based on the three-dimensional structure, and points of the cut shape are associated with the simplified structure to generate a curve. Points of the curve corresponding to edges of the curve on the simplified structure are determined, and geometrical characteristics and texture coordinates of the new points calculated. A new three dimensional structure is generated along the curve and layers of the structure are joined, for the cut and layered structure to be rendered in the user interface. An apparatus embodying the method is also disclosed.
1. An apparatus for editing a digital three-dimensional structure associated with one or more two-dimensional texture in real time, comprising: a. data storage means adapted to store the digital three-dimensional structure and the one or more two-dimensional texture ; b. data processing means adapted to process the stored digital three-dimensional structure and the one or more two-dimensional texture and output same in a user interface, read user input in the user interface and process user input data into a cut shape of the three-dimensional structure, generate a simplified structure based on the three-dimensional structure, associate points of the cut shape with the simplified structure to generate a curve, determine new points of the curve corresponding to edges of the curve on the simplified structure, calculate geometrical characteristics and texture coordinates of the new points, and generate a new three dimensional structure along the curve and join layers of the structure ; and c. display means for displaying the user interface. 2. An apparatus according to claim 1, wherein the data processing means is further adapted to alter a rendering perspective of the stored and processed digital three-dimensional structure and the one or more two-dimensional texture in the user interface via one or more rotations, translations, scaling. 3. An apparatus according to claim 1, wherein the data processing means is further adapted to generate the simplified structure as a temporary three-dimensional structure. 4. An apparatus according to claim 3, wherein the temporary structure comprises a HalfEdge mesh associated with a C-Mesh. 5. An apparatus according to claim 1, wherein the data processing means is further adapted to associate points by painting the cut shape on the three-dimensional structure as a plurality of points converted to three-dimensional space and read from an off-screen depth buffer, defining a curve. 6. 
An apparatus according to claim 5, wherein the data processing means is further adapted to process the plurality of points with a Laplacian smoothing algorithm and a Ramer-Douglas-Peucker algorithm. 7. An apparatus according to claim 6, wherein the data processing means is further adapted to determine new points by projecting approximate intersection points of the painted curve in two dimensions then mirroring back to find intersections on triangles. 8. An apparatus according to claim 7, wherein the data processing means is further adapted to calculate by identifying a number of intersection points on triangle faces to determine a number and an order of layers associated with the simplified structure. 9. An apparatus according to claim 8, wherein the data processing means is further adapted to calculate by determining points of each triangle intersecting another triangle to identify divided triangles having two splitable edges. 10. An apparatus according to claim 9, wherein the data processing means is further adapted to split triangle edges by building a plane from adjacent intersection points of two triangles and determining an intersection point for the plane and edge. 11. An apparatus according to claim 10, wherein the data processing means is further adapted to join layers by joining all points of two neighbouring layers via triangulation. 12. An apparatus according to claim 1, wherein the stored digital three-dimensional structure and the one or more two-dimensional texture comprises learning plates in a medical reference. 13. A method of editing a digital three-dimensional structure associated with one or more two-dimensional texture in real time, comprising the steps of: a. processing the digital three-dimensional structure and the one or more two-dimensional texture and outputting same in a user interface, b. reading user input in the user interface and processing user input data into a cut shape of the three-dimensional structure, c. 
generating a simplified structure based on the three-dimensional structure, d. associating points of the cut shape with the simplified structure to generate a curve, e. determining new points of the curve corresponding to edges of the curve on the simplified structure, f. calculating geometrical characteristics and texture coordinates of the new points, g. generating a new three dimensional structure along the curve and joining layers of the structure, and h. updating the user interface. 14. A method according to claim 13, comprising the further step of altering a rendering perspective of the stored and processed digital three-dimensional structure and the one or more two-dimensional texture in the user interface via one or more rotations, translations, scaling. 15. A method according to claim 13, wherein generating the simplified structure further comprises generating a temporary three-dimensional structure. 16. A method according to claim 13, wherein the temporary structure comprises a HalfEdge mesh associated with a C-Mesh. 17. A method according to claim 13, wherein associating points comprises the further step of painting the cut shape on the three-dimensional structure as a plurality of points converted to three-dimensional space and read from an off-screen depth buffer, defining a curve. 18. A method according to claim 17, wherein determining new points comprises the further step of processing the plurality of points with a Laplacian smoothing algorithm and a Ramer-Douglas-Peucker algorithm. 19. A method according to claim 18, wherein determining new points comprises the further step of projecting approximate intersection points of the painted curve in two dimensions then mirroring back to find intersections on triangles. 20. A method according to claim 19, wherein calculating comprises the further step of identifying a number of intersection points on triangle faces to determine a number and an order of layers associated with the simplified structure. 
21. A method according to claim 20, wherein calculating comprises the further step of determining points of each triangle intersecting another triangle to identify divided triangles having two splitable edges. 22. A method according to claim 21, wherein calculating comprises the further step of building a plane from adjacent intersection points of two triangles and determining an intersection point for the plane and edge. 23. A method according to claim 22, wherein joining layers comprises the further step of joining all points of two neighbouring layers via triangulation. 24. A method according to claim 22, wherein the stored digital three-dimensional structure and the one or more two-dimensional texture comprises learning plates in a medical reference. 25. A computer program product recorded on a data carrying medium which, when processed by a data processing terminal, configures the terminal to a. process a digital three-dimensional structure and one or more two-dimensional texture and output same in a user interface, b. read user input in the user interface and process user input data into a cut shape of the three-dimensional structure, c. generate a simplified structure based on the three-dimensional structure, d. associate points of the cut shape with the simplified structure to generate a curve, e. determine new points of the curve corresponding to edges of the curve on the simplified structure, f. calculate geometrical characteristics and texture coordinates of the new points, g. generate a new three dimensional structure along the curve and join layers of the structure and h. update the user interface.
A method of editing a digital three-dimensional structure associated with one or more two-dimensional texture in real time is disclosed, wherein the structure and one or more texture are processed and output same in a user interface, and user input is read in the user interface and processed into a cut shape of the three-dimensional structure. A simplified structure is generated based on the three-dimensional structure, and points of the cut shape are associated with the simplified structure to generate a curve. Points of the curve corresponding to edges of the curve on the simplified structure are determined, and geometrical characteristics and texture coordinates of the new points calculated. A new three dimensional structure is generated along the curve and layers of the structure are joined, for the cut and layered structure to be rendered in the user interface. An apparatus embodying the method is also disclosed.1. An apparatus for editing a digital three-dimensional structure associated with one or more two-dimensional texture in real time, comprising: a. data storage means adapted to store the digital three-dimensional structure and the one or more two-dimensional texture ; b. data processing means adapted to process the stored digital three-dimensional structure and the one or more two-dimensional texture and output same in a user interface, read user input in the user interface and process user input data into a cut shape of the three-dimensional structure, generate a simplified structure based on the three-dimensional structure, associate points of the cut shape with the simplified structure to generate a curve, determine new points of the curve corresponding to edges of the curve on the simplified structure, calculate geometrical characteristics and texture coordinates of the new points, and generate a new three dimensional structure along the curve and join layers of the structure ; and c. display means for displaying the user interface. 2. 
An apparatus according to claim 1, wherein the data processing means is further adapted to alter a rendering perspective of the stored and processed digital three-dimensional structure and the one or more two-dimensional texture in the user interface via one or more rotations, translations, scaling. 3. An apparatus according to claim 1, wherein the data processing means is further adapted to generate the simplified structure as a temporary three-dimensional structure. 4. An apparatus according to claim 3, wherein the temporary structure comprises a HalfEdge mesh associated with a C-Mesh. 5. An apparatus according to claim 1, wherein the data processing means is further adapted to associate points by painting the cut shape on the three-dimensional structure as a plurality of points converted to three-dimensional space and read from an off-screen depth buffer, defining a curve. 6. An apparatus according to claim 5, wherein the data processing means is further adapted to process the plurality of points with a Laplacian smoothing algorithm and a Ramer-Douglas-Peucker algorithm. 7. An apparatus according to claim 6, wherein the data processing means is further adapted to determine new points by projecting approximate intersection points of the painted curve in two dimensions then mirroring back to find intersections on triangles. 8. An apparatus according to claim 7, wherein the data processing means is further adapted to calculate by identifying a number of intersection points on triangle faces to determine a number and an order of layers associated with the simplified structure. 9. An apparatus according to claim 8, wherein the data processing means is further adapted to calculate by determining points of each triangle intersecting another triangle to identify divided triangles having two splitable edges. 10. 
An apparatus according to claim 9, wherein the data processing means is further adapted to split triangle edges by building a plane from adjacent intersection points of two triangles and determining an intersection point for the plane and edge. 11. An apparatus according to claim 10, wherein the data processing means is further adapted to join layers by joining all points of two neighbouring layers via triangulation. 12. An apparatus according to claim 1, wherein the stored digital three-dimensional structure and the one or more two-dimensional texture comprises learning plates in a medical reference. 13. A method of editing a digital three-dimensional structure associated with one or more two-dimensional texture in real time, comprising the steps of: a. processing the digital three-dimensional structure and the one or more two-dimensional texture and outputting same in a user interface, b. reading user input in the user interface and processing user input data into a cut shape of the three-dimensional structure, c. generating a simplified structure based on the three-dimensional structure, d. associating points of the cut shape with the simplified structure to generate a curve, e. determining new points of the curve corresponding to edges of the curve on the simplified structure, f. calculating geometrical characteristics and texture coordinates of the new points, g. generating a new three dimensional structure along the curve and joining layers of the structure, and h. updating the user interface. 14. A method according to claim 13, comprising the further step of altering a rendering perspective of the stored and processed digital three-dimensional structure and the one or more two-dimensional texture in the user interface via one or more rotations, translations, scaling. 15. A method according to claim 13, wherein generating the simplified structure further comprises generating a temporary three-dimensional structure. 16. 
A method according to claim 13, wherein the temporary structure comprises a HalfEdge mesh associated with a C-Mesh. 17. A method according to claim 13, wherein associating points comprises the further step of painting the cut shape on the three-dimensional structure as a plurality of points converted to three-dimensional space and read from an off-screen depth buffer, defining a curve. 18. A method according to claim 17, wherein determining new points comprises the further step of processing the plurality of points with a Laplacian smoothing algorithm and a Ramer-Douglas-Peucker algorithm. 19. A method according to claim 18, wherein determining new points comprises the further step of projecting approximate intersection points of the painted curve in two dimensions then mirroring back to find intersections on triangles. 20. A method according to claim 19, wherein calculating comprises the further step of identifying a number of intersection points on triangle faces to determine a number and an order of layers associated with the simplified structure. 21. A method according to claim 20, wherein calculating comprises the further step of determining points of each triangle intersecting another triangle to identify divided triangles having two splitable edges. 22. A method according to claim 21, wherein calculating comprises the further step of building a plane from adjacent intersection points of two triangles and determining an intersection point for the plane and edge. 23. A method according to claim 22, wherein joining layers comprises the further step of joining all points of two neighbouring layers via triangulation. 24. A method according to claim 22, wherein the stored digital three-dimensional structure and the one or more two-dimensional texture comprises learning plates in a medical reference. 25. A computer program product recorded on a data carrying medium which, when processed by a data processing terminal, configures the terminal to a. 
process a digital three-dimensional structure and one or more two-dimensional texture and output same in a user interface, b. read user input in the user interface and process user input data into a cut shape of the three-dimensional structure, c. generate a simplified structure based on the three-dimensional structure, d. associate points of the cut shape with the simplified structure to generate a curve, e. determine new points of the curve corresponding to edges of the curve on the simplified structure, f. calculate geometrical characteristics and texture coordinates of the new points, g. generate a new three dimensional structure along the curve and join layers of the structure and h. update the user interface.
2,600
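The mesh-cutting claims above repeatedly invoke a Laplacian smoothing algorithm and a Ramer-Douglas-Peucker algorithm to turn the painted cut points into a clean curve. As an illustration of the second step only, here is a small, self-contained Ramer-Douglas-Peucker sketch for a 2-D polyline; the patent applies it to points projected from 3-D, but the simplification logic is the same. The epsilon value and sample points are invented for the example.

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification of a 2-D polyline."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0  # guard against a zero-length chord
    # find the interior point farthest (perpendicular) from the chord
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        d = abs(dy * (x0 - x1) - dx * (y0 - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax > epsilon:  # keep the farthest point and recurse on both halves
        return rdp(points[: idx + 1], epsilon)[:-1] + rdp(points[idx:], epsilon)
    return [points[0], points[-1]]  # all interior points are close enough

print(rdp([(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)], 1.0))
```

Points whose perpendicular distance from the current chord stays under epsilon are discarded, which is why the claims pair it with Laplacian smoothing: smoothing removes jitter first, then RDP reduces the point count before intersection testing.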
10,205
10,205
15,135,669
2,626
According to a first aspect of the present disclosure an electronic device is provided, which comprises a non-conductive substrate and a touch-based user interface unit having a capacitive sensor structure, wherein said capacitive sensor structure comprises conductive wires embedded in the non-conductive substrate. According to a second aspect of the present disclosure a corresponding method of manufacturing an electronic device is conceived.
1. An electronic device comprising a non-conductive substrate and a touch-based user interface unit having a capacitive sensor structure, wherein said capacitive sensor structure comprises conductive wires embedded in the non-conductive substrate. 2. A device as claimed in claim 1, wherein said capacitive sensor structure comprises a pair of adjacent conductive wires embedded in the non-conductive substrate, and wherein each of said adjacent conductive wires functions as an electrode in the touch-based user interface unit. 3. A device as claimed in claim 1, wherein at least some of said conductive wires are arranged in a meander pattern. 4. A device as claimed in claim 1, wherein at least some of said conductive wires are arranged in a spiral pattern. 5. A device as claimed in claim 1, wherein said capacitive sensor structure further comprises wire terminals embedded in the non-conductive substrate. 6. A device as claimed in claim 5, further comprising a communication and processing module having contact pads, wherein the wire terminals are connected to said contact pads. 7. A device as claimed in claim 6, wherein the wire terminals have been prepared for connection to the contact pads by carrying out a milling process. 8. A device as claimed in claim 1, further comprising an antenna embedded in the non-conductive substrate. 9. A device as claimed in claim 8, wherein the conductive wires are made of the same material as said antenna. 10. A device as claimed in claim 8, said device having been provided with the antenna and with the touch-based user interface unit in a single manufacturing step. 11. A device as claimed in claim 1, wherein the conductive wires are insulated conductive wires. 12. A device as claimed in claim 1, wherein the non-conductive substrate is a thermoplastic substrate. 13. A device as claimed in claim 1, wherein the conductive wires are copper wires. 14. A device as claimed in claim 1, being a smart card. 15. 
A method of manufacturing an electronic device, the method comprising providing the electronic device with a non-conductive substrate and providing the electronic device with a touch-based user interface unit having a capacitive sensor structure, wherein said capacitive sensor structure is formed by embedding conductive wires into the non-conductive substrate.
According to a first aspect of the present disclosure an electronic device is provided, which comprises a non-conductive substrate and a touch-based user interface unit having a capacitive sensor structure, wherein said capacitive sensor structure comprises conductive wires embedded in the non-conductive substrate. According to a second aspect of the present disclosure a corresponding method of manufacturing an electronic device is conceived.1. An electronic device comprising a non-conductive substrate and a touch-based user interface unit having a capacitive sensor structure, wherein said capacitive sensor structure comprises conductive wires embedded in the non-conductive substrate. 2. A device as claimed in claim 1, wherein said capacitive sensor structure comprises a pair of adjacent conductive wires embedded in the non-conductive substrate, and wherein each of said adjacent conductive wires functions as an electrode in the touch-based user interface unit. 3. A device as claimed in claim 1, wherein at least some of said conductive wires are arranged in a meander pattern. 4. A device as claimed in claim 1, wherein at least some of said conductive wires are arranged in a spiral pattern. 5. A device as claimed in claim 1, wherein said capacitive sensor structure further comprises wire terminals embedded in the non-conductive substrate. 6. A device as claimed in claim 5, further comprising a communication and processing module having contact pads, wherein the wire terminals are connected to said contact pads. 7. A device as claimed in claim 6, wherein the wire terminals have been prepared for connection to the contact pads by carrying out a milling process. 8. A device as claimed in claim 1, further comprising an antenna embedded in the non-conductive substrate. 9. A device as claimed in claim 8, wherein the conductive wires are made of the same material as said antenna. 10. 
A device as claimed in claim 8, said device having been provided with the antenna and with the touch-based user interface unit in a single manufacturing step. 11. A device as claimed in claim 1, wherein the conductive wires are insulated conductive wires. 12. A device as claimed in claim 1, wherein the non-conductive substrate is a thermoplastic substrate. 13. A device as claimed in claim 1, wherein the conductive wires are copper wires. 14. A device as claimed in claim 1, being a smart card. 15. A method of manufacturing an electronic device, the method comprising providing the electronic device with a non-conductive substrate and providing the electronic device with a touch-based user interface unit having a capacitive sensor structure, wherein said capacitive sensor structure is formed by embedding conductive wires into the non-conductive substrate.
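The sensing principle behind claim 1 — a fingertip near a pair of adjacent embedded wire electrodes changes the measured capacitance relative to a calibrated no-touch baseline — can be illustrated with a minimal sketch. All function names, units, and threshold values here are hypothetical illustrations, not taken from the patent:

```python
# Hypothetical sketch: detect a touch on a wire-electrode capacitive sensor
# by comparing a capacitance reading (arbitrary units) against a baseline
# calibrated from no-touch samples. Threshold value is an assumption.

def calibrate_baseline(samples):
    """Average several no-touch capacitance readings."""
    return sum(samples) / len(samples)

def is_touched(reading, baseline, threshold=0.15):
    """Report a touch when the reading exceeds the baseline by a relative margin."""
    return (reading - baseline) / baseline > threshold

baseline = calibrate_baseline([10.0, 10.2, 9.8, 10.0])
print(is_touched(10.1, baseline))  # → False (small drift, no touch)
print(is_touched(13.0, baseline))  # → True (finger raises capacitance)
```

A real controller would additionally filter noise and re-calibrate the baseline slowly over time to track temperature drift; the relative-margin comparison above is the core decision step.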
2,600
10,206
10,206
15,336,614
2,657
A computer-implemented method according to one embodiment includes receiving audio data, processing the audio data to determine a plurality of words spoken by a user, and analyzing the plurality of words to determine a behavior of the user.
1. A computer-implemented method, comprising: receiving, from a mobile device at a cloud computing environment, audio data including a first instance of recorded audio, where the first instance of recorded audio includes sounds made within a vicinity of the mobile device and is associated with a timestamp and a geographical location; processing the audio data within the cloud computing environment utilizing natural language processing to determine textual data representing a plurality of words and non-word sounds spoken by a first user to a second user within the audio data, as well as an identification of the first user and the second user; analyzing the plurality of words within the cloud computing environment to determine a behavior of the first user, including identifying one or more matches between the plurality of words and non-word sounds spoken by the first user to the second user and a plurality of categorized words and non-word sounds that are associated with predetermined behavior; receiving, from the mobile device or another mobile device at the cloud computing environment, additional audio data occurring before and after the first instance of recorded audio, the additional audio data including additional instances of recorded audio that are each associated with an additional timestamp; processing the additional audio data within the cloud computing environment utilizing the natural language processing to determine additional textual data representing an additional plurality of words and non-word sounds spoken by the second user within the additional audio data; analyzing the additional plurality of words within the cloud computing environment to determine a behavior of the second user before the first instance of recorded audio and after the first instance of recorded audio, including identifying one or more additional matches between the additional plurality of words and non-word sounds spoken by the second user and the plurality of categorized words 
and non-word sounds that are associated with the predetermined behavior; and determining within the cloud computing environment an effect the behavior of the first user determined during the first instance of recorded audio has on the behavior of the second user over time by analyzing the behavior of the first user determined during the first instance of recorded audio together with the behavior of the second user determined during the additional instances of recorded audio that occur before and after the first instance of recorded audio, where the effect the behavior of the first user has on the behavior of the second user over time includes one or more changes in the behavior of the second user that are due to the behavior of the first user. 2. (canceled) 3. (canceled) 4. The computer-implemented method of claim 1, wherein processing the audio data includes identifying the first user as a source of the plurality of words spoken by the first user by comparing the audio data to one or more predetermined voiceprints. 5. The computer-implemented method of claim 1, wherein the cloud computing environment includes a plurality of remote processing devices that provides a set of functional abstraction layers and enables on-demand access to a shared pool of configurable computing resources. 6. The computer-implemented method of claim 1, wherein analyzing the plurality of words further includes: identifying one or more patterns within the plurality of words spoken by the first user; and comparing the one or more patterns to one or more predetermined patterns, where each of the one or more predetermined patterns is associated with the predetermined behavior, and where the one or more predetermined patterns were extracted from one or more knowledge bases and research papers. 7. 
The computer-implemented method of claim 1, wherein analyzing the plurality of words includes: identifying one or more predetermined keywords within the plurality of words by comparing the plurality of words to a predetermined keyword database; and determining a behavior associated with the one or more predetermined keywords identified within the plurality of words. 8. The computer-implemented method of claim 1, wherein: the geographical location associated with the first instance of recorded audio includes a location of the mobile device obtained utilizing a global positioning system (GPS) module, and the first instance of recorded audio includes results of monitoring all sounds within a vicinity of the mobile device, at a predetermined time associated with the timestamp, and at a predetermined location associated with the geographical location. 9. The computer-implemented method of claim 1, wherein the additional audio data includes sounds made within a vicinity of one or more monitoring devices other than the mobile device, and includes one or more verbal statements made by the second user when the second user is alone. 10. (canceled) 11. 
A computer program product for determining a behavior of a user utilizing audio data, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, wherein the computer readable storage medium is not a transitory signal per se, the program instructions executable by a processor to cause the processor to perform a method comprising: receiving from a mobile device, utilizing a processor at a cloud computing environment, audio data including a first instance of recorded audio, where the first instance of recorded audio includes sounds made within a vicinity of the mobile device and is associated with a timestamp and a geographical location; processing, utilizing the processor, the audio data within the cloud computing environment utilizing natural language processing to determine textual data representing a plurality of words and non-word sounds spoken by a first user to a second user within the audio data, as well as an identification of the first user and the second user; analyzing, utilizing the processor, the plurality of words within the cloud computing environment to determine a behavior of the first user, including identifying one or more matches between the plurality of words and non-word sounds spoken by the first user to the second user and a plurality of categorized words and non-word sounds that are associated with predetermined behavior; receiving from the mobile device or another mobile device, utilizing the processor at the cloud computing environment, additional audio data occurring before and after the first instance of recorded audio, the additional audio data including additional instances of recorded audio that are each associated with an additional timestamp; processing, utilizing the processor at the cloud computing environment, the additional audio data utilizing the natural language processing to determine additional textual data representing an additional plurality of words and 
non-word sounds spoken by the second user within the additional audio data; analyzing, utilizing the processor at the cloud computing environment, the additional plurality of words to determine a behavior of the second user before the first instance of recorded audio and after the first instance of recorded audio, including identifying one or more additional matches between the additional plurality of words and non-word sounds spoken by the second user and the plurality of categorized words and non-word sounds that are associated with the predetermined behavior; and determining, utilizing the processor at the cloud computing environment, an effect the behavior of the first user determined during the first instance of recorded audio has on the behavior of the second user over time by analyzing the behavior of the first user determined during the first instance of recorded audio together with the behavior of the second user determined during the additional instances of recorded audio that occur before and after the first instance of recorded audio, where the effect the behavior of the first user has on the behavior of the second user over time includes one or more changes in the behavior of the second user that are due to the behavior of the first user. 12. (canceled) 13. The computer program product of claim 11, wherein processing the audio data includes identifying, utilizing the processor, the first user as a source of the plurality of words spoken by the first user by comparing the audio data to one or more predetermined voiceprints. 14. The computer program product of claim 11, wherein the first instance of recorded audio includes all audio recorded at a predetermined location. 15. 
The computer program product of claim 11, wherein analyzing the plurality of words further includes: identifying, utilizing the processor, one or more patterns within the plurality of words spoken by the first user; and comparing, utilizing the processor, the one or more patterns to one or more predetermined patterns, where each of the one or more predetermined patterns is associated with the predetermined behavior, and where the one or more predetermined patterns were extracted from one or more knowledge bases and research papers. 16. The computer program product of claim 11, wherein analyzing the plurality of words includes: identifying, utilizing the processor, one or more predetermined keywords within the plurality of words by comparing the plurality of words to a predetermined keyword database; and determining, utilizing the processor, a behavior associated with the one or more predetermined keywords identified within the plurality of words. 17. The computer program product of claim 11, wherein: the geographical location associated with the first instance of recorded audio includes a location of the mobile device obtained utilizing a global positioning system (GPS) module, and the first instance of recorded audio includes results of monitoring all sounds within a vicinity of the mobile device, at a predetermined time associated with the timestamp, and at a predetermined location associated with the geographical location. 18. The computer program product of claim 11, wherein the additional audio data includes sounds made within a vicinity of one or more monitoring devices other than the mobile device, and includes one or more verbal statements made by the second user when the second user is alone. 19. (canceled) 20. 
A computer-implemented method, comprising: determining, at a mobile device, that a current time and a current location of the mobile device meet a predetermined time and predetermined location; in response to determining that the current time and the current location of the mobile device meet the predetermined time and the predetermined location, monitoring and recording as a first instance of recorded audio all audio data within a range of a microphone of the mobile device, where the first instance of recorded audio is associated with a timestamp and a geographical location; processing, at the mobile device, the audio data utilizing natural language processing to determine textual data representing a plurality of words and non-word sounds spoken by a first user to a second user within the audio data, as well as an identification of the first user and the second user; analyzing, at the mobile device, the plurality of words to determine a behavior of the first user, including identifying one or more matches between the plurality of words and non-word sounds spoken by the first user to the second user and a plurality of categorized words and non-word sounds that are associated with predetermined behavior; recording, at the mobile device, additional audio data occurring before and after the first instance of recorded audio, the additional audio data including additional instances of recorded audio that are each associated with an additional timestamp; processing, at the mobile device, the additional audio data utilizing the natural language processing to determine additional textual data representing an additional plurality of words and non-word sounds spoken by the second user within the additional audio data; analyzing, at the mobile device, the additional plurality of words to determine a behavior of the second user before the first instance of recorded audio and after the first instance of recorded audio, including identifying one or more additional matches between the 
additional plurality of words and non-word sounds spoken by the second user and the plurality of categorized words and non-word sounds that are associated with the predetermined behavior; and determining, at the mobile device, an effect the behavior of the first user determined during the first instance of recorded audio has on the behavior of the second user over time by analyzing the behavior of the first user determined during the first instance of recorded audio together with the behavior of the second user determined during the additional instances of recorded audio that occur before and after the first instance of recorded audio, where the effect the behavior of the first user has on the behavior of the second user over time includes one or more changes in the behavior of the second user that are due to the behavior of the first user. 21. The computer-implemented method of claim 1, wherein the additional instances of recorded audio that are each associated with the additional timestamp are also each associated with an additional geographical location.
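The core analysis step the claims recite — matching a user's spoken words against categorized word lists associated with predetermined behaviors, then comparing the second user's behavior before and after an interaction — can be sketched in a few lines. The behavior categories, keyword sets, and function names below are hypothetical illustrations, not the patent's actual data:

```python
# Hypothetical sketch of keyword-based behavior classification: words spoken
# by a user are matched against categorized word sets, each set associated
# with a predetermined behavior. Categories and keywords are invented examples.

BEHAVIOR_KEYWORDS = {
    "aggressive": {"stupid", "shut", "idiot"},
    "supportive": {"great", "proud", "thanks"},
}

def classify_behavior(words):
    """Count keyword matches per category; return the best-scoring behavior."""
    tokens = {w.lower().strip(".,!?") for w in words}
    scores = {b: len(tokens & kws) for b, kws in BEHAVIOR_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def behavior_change(before_words, after_words):
    """Compare a user's classified behavior before vs. after an interaction."""
    before, after = classify_behavior(before_words), classify_behavior(after_words)
    return {"before": before, "after": after, "changed": before != after}

print(classify_behavior("you did a great job, proud of you".split()))  # → supportive
```

A production system per the claims would feed transcripts from natural language processing into this step and attribute utterances to speakers via voiceprints; the set-intersection scoring above stands in for that matching logic.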
2,600
10,207
10,207
15,320,387
2,688
A method for determining parking spaces, traffic participants ascertaining information about free parking spaces and communicating the information to a cloud computing system, the cloud computing system storing information about the free parking spaces in retrievable fashion in a parking space map, information about the provided parking space being visually presented on a display device of the traffic participant. A computer program and a free parking space assistance system, which are suitable in particular for carrying out the method, are also provided.
1-14. (canceled) 15. A method for determining parking spaces, comprising: ascertaining, by traffic participants, information about free parking spaces; communicating the information to a cloud computing system, the cloud computing system storing information about the free parking spaces in retrievable fashion in a parking space map, the cloud computing system ascertaining and providing a free parking space in at least one of: i) a local surrounding environment of the traffic participant, and ii) an environment of a known navigation destination; and visually presenting information about the provided parking space on a display device of the traffic participant. 16. The method as recited in claim 15, wherein the display device is a head-up display that is suitable for blending virtual objects into the field of view of the traffic participant. 17. The method as recited in claim 16, wherein the information about the provided parking space includes at least one of virtual boundary lines, and a virtual parking space symbol. 18. The method as recited in claim 15, wherein the traffic participants are vehicles equipped with environmental acquisition devices that, when traveling past free parking spaces, ascertain the free parking spaces, independently of whether parking space markings are present. 19. The method as recited in claim 15, wherein at least one of the location, size, and further meta-information about parking spaces is ascertained. 20. The method as recited in claim 15, wherein the location of the free parking space is ascertained via a navigation system. 21. The method as recited in claim 15, wherein when there is a request, free parking spaces are displayed to a traffic participant. 22. 
The method as recited in claim 15, wherein the cloud computing system ascertains and provides suitable free parking spaces at least one of: i) on the basis of vehicle dimensions of the traffic participant, ii) on the basis of desires of the traffic participant, and iii) on the basis of properties of the parking space. 23. The method as recited in claim 15, wherein, as a function of current conditions, the cloud computing system makes available or blocks specified surfaces as parking spaces. 24. A non-transitory computer-readable storage medium on which is stored a computer program for determining parking spaces, the computer program, when executed by a programmable device, causing the programmable device to perform: ascertaining, by traffic participants, information about free parking spaces; communicating the information to a cloud computing system, the cloud computing system storing information about the free parking spaces in retrievable fashion in a parking space map, the cloud computing system ascertaining and providing a free parking space in at least one of: i) a local surrounding environment of the traffic participant, and ii) an environment of a known navigation destination; and visually presenting information about the provided parking space on a display device of the traffic participant. 25. A free parking space assistance system, comprising: a cloud computing system that is set up to receive information about free parking spaces and to store the information in a parking space map, the cloud computing system further set up to provide the information about the free parking spaces on request, the cloud computing system designed to ascertain and provide free parking spaces at least one of: in a local environment of a traffic participant, and in an environment of a known navigation destination. 26. 
The free parking space assistance system as recited in claim 25, further comprising: at least one vehicle having a display device, information about the provided parking space being visually presented on a display device of the traffic participant. 27. The free parking space assistance system as recited in claim 25, further comprising: at least one vehicle having an environmental acquisition device and a communication unit, the environmental acquisition device being set up to ascertain free parking spaces in the surrounding environment of the vehicle, and the communication unit being set up to transmit ascertained information about the free parking spaces to the cloud computing system. 28. The free parking space assistance system as recited in claim 25, further comprising: a mobile unit, assigned to a traffic participant, that is set up to call information about the free parking spaces from the cloud computing system and to provide it to the traffic participant.
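The claimed method (traffic participants ascertain free spaces, report them to a cloud-hosted parking space map, and query spaces near their location or a navigation destination, filtered by vehicle dimensions) can be sketched as below. All class and method names here are illustrative assumptions for this sketch, not terms from the claims, and the in-memory list stands in for the claimed cloud computing system.

```python
import math

class ParkingSpaceMap:
    """Minimal in-memory stand-in for the claimed cloud parking space map."""

    def __init__(self):
        self.spaces = []  # each entry: location plus meta-information

    def report_free_space(self, lat, lon, length_m=None, marked=True):
        # Traffic participants ascertain free spaces, with or without
        # parking space markings, and communicate them for retrievable
        # storage in the parking space map.
        self.spaces.append({"lat": lat, "lon": lon,
                            "length_m": length_m, "marked": marked})

    def find_near(self, lat, lon, radius_m=200, min_length_m=0.0):
        # Provide spaces in a local environment or around a known
        # navigation destination, filtered by vehicle dimensions.
        def dist_m(s):
            # Equirectangular approximation, adequate at city scale.
            dx = math.radians(s["lon"] - lon) * math.cos(math.radians(lat))
            dy = math.radians(s["lat"] - lat)
            return 6371000.0 * math.hypot(dx, dy)
        return [s for s in self.spaces
                if dist_m(s) <= radius_m
                and (s["length_m"] is None or s["length_m"] >= min_length_m)]

cloud = ParkingSpaceMap()
cloud.report_free_space(48.1374, 11.5755, length_m=5.2)
cloud.report_free_space(48.1500, 11.5800, length_m=4.0)
# Query around the first location for a vehicle needing >= 4.5 m.
nearby = cloud.find_near(48.1374, 11.5755, radius_m=300, min_length_m=4.5)
```

The second reported space is excluded here both by distance (roughly 1.4 km away) and by the minimum-length filter, leaving one candidate space to present on the display device.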
A method for determining parking spaces, traffic participants ascertaining information about free parking spaces and communicating the information to a cloud computing system, the cloud computing system storing information about the free parking spaces in retrievable fashion in a parking space map, information about the provided parking space being visually presented on a display device of the traffic participant. A computer program and a free parking space assistance system, which are suitable in particular for carrying out the method, are also provided.
2,600
10,208
10,208
14,072,687
2,641
Techniques are disclosed for intelligent management of multiple communications links. One communications link can be used to bring up another communications link with little or no user input. Selective enablement/disablement of one or more of the communications links is based on system needs and other criteria. Utilizing one more-secure communications link to improve security on another less-secure communications link between the same devices is also contemplated.
1. A method comprising: detecting, by a first device, a wireless signal originating from a second device, the wireless signal indicating availability of a first communications link, the first communications link being a wireless communications link; sending, by the first device, first communications link setup information corresponding to establishing the first communications link between the first device and the second device; sending, by the first device, a first command over the first communications link, the first command configured to cause the second device to begin advertising availability of a second communications link, the second communications link being another wireless communications link having different characteristics than the first communications link; receiving, by the first device over the first communications link, information corresponding to the second communications link; and using, by the first device, the information corresponding to the second communications link to facilitate establishing the second communications link. 2. The method of claim 1, wherein the information corresponding to the second communications link includes an address for the second device. 3. The method of claim 1, further comprising: receiving, by the first device over the first communications link, a first security code originating from the second device; receiving, by the first device over the second communications link, a second security code originating from the second device; and verifying the second communications link using the first and second security codes. 4. The method of claim 1, further comprising: receiving, by the first device over the first communications link, a security code originating from the second device; and sending, by the first device over the second communications link, verification information corresponding to the security code. 5. 
The method of claim 1, further comprising disabling, by the first device, one of the first communications link and the second communications link to produce a disabled communications link, wherein a maintained communications link corresponds to one of the first communications link and the second communications link not disabled. 6. The method of claim 5, wherein the disabling further comprises sending, by the first device, a second command over one of the first communications link and the second communications link. 7. The method of claim 6, further comprising sending, by the first device, a third command over the maintained communications link, the third command configured to re-establish the disabled communications link. 8. The method of claim 5, further comprising, prior to the disabling, determining identity of the maintained communications link based on an application running on the first device. 9. The method of claim 5, further comprising, prior to the disabling, determining identity of the maintained communications link based on parameters stored on the first device. 10. The method of claim 5, further comprising, prior to the disabling, determining identity of the maintained communications link based on data bandwidth needs. 11. The method of claim 1, further comprising receiving, by the first device, a second command over one of the first communications link and the second communications link, the second command configured to disable one of the first communications link and the second communications link to produce a disabled communications link, wherein a maintained communications link corresponds to one of the first communications link and the second communications link not disabled as a result of the second command. 12. 
The method of claim 1, further comprising using, by the first device, security information configured to provide additional security with respect to communications between the first device and the second device over the second communications link, wherein the security information is transferred over the first communications link. 13. The method of claim 1, wherein the first communications link corresponds to a Classic Bluetooth communications link, and the second communications link corresponds to a Bluetooth Low Energy communications link. 14. A method comprising: detecting, at a first device, a wireless signal originating from a second device, the wireless signal indicating availability of a first communications link, the first communications link being a wireless communications link; sending, by the first device, first communications link setup information corresponding to establishing the first communications link between the first device and the second device; receiving, by the first device, a first command over the first communications link; in response to the first command, advertising, by the first device, availability of a second communications link, the second communications link being another wireless communications link having different characteristics than the first communications link; and in response to the first device receiving a response to the advertising, using, by the first device, setup information included in the response to the advertising to facilitate establishing the second communications link. 15. The method of claim 14, further comprising, in response to the first device not receiving a response to the advertising within a predetermined period of time, terminating the advertising. 16. 
The method of claim 14, further comprising: generating a first security code by the first device; sending, by the first device, the first security code over the first communications link; generating a second security code by the first device; and sending, by the first device, the second security code over the second communications link. 17. The method of claim 14, further comprising: generating a security code by the first device; sending the security code over the first communications link; receiving a responsive code over the second communications link; and comparing the responsive code with the security code. 18. The method of claim 17, further comprising: in response to a determination that the responsive code does not correspond to the security code, terminating the second communications link. 19. The method of claim 17, further comprising: in response to a determination that the responsive code does not correspond to the security code, providing an alert corresponding to the determination. 20. The method of claim 14, further comprising disabling, by the first device, one of the first communications link and the second communications link to produce a disabled communications link, wherein a maintained communications link corresponds to one of the first communications link and the second communications link not disabled. 21. The method of claim 20, wherein the disabling further comprises sending, by the first device, a second command over one of the first communications link and the second communications link. 22. The method of claim 21, further comprising sending, by the first device, a third command over the maintained communications link, the third command configured to re-establish the disabled communications link. 23. The method of claim 20, further comprising, prior to the disabling, determining identity of the maintained communications link based on an application running on the first device. 24. 
The method of claim 20, further comprising, prior to the disabling, determining identity of the maintained communications link based on parameters stored on the first device. 25. The method of claim 20, further comprising, prior to the disabling, determining identity of the maintained communications link based on data bandwidth needs. 26. The method of claim 14, further comprising receiving, by the first device, a second command over one of the first communications link and the second communications link, the second command configured to disable one of the first communications link and the second communications link to produce a disabled communications link, wherein a maintained communications link corresponds to one of the first communications link and the second communications link not disabled as a result of the second command. 27. The method of claim 14, further comprising using, by the first device, security information configured to provide additional security with respect to communications between the first device and the second device over the second communications link, wherein the security information is transferred over the first communications link. 28. The method of claim 14, wherein the first communications link corresponds to a Classic Bluetooth communications link, and the second communications link corresponds to a Bluetooth Low Energy communications link.
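The cross-link verification in these claims (a security code received over the first link, a responsive code compared over the second link) can be sketched as below. The claims do not specify how the two codes are related, so this sketch assumes an HMAC binding of the second link's address to a secret shared over the first link; the function names and the address format are illustrative assumptions.

```python
import hashlib
import hmac
import os

def derive_expected(code_over_link1: bytes, link2_address: bytes) -> bytes:
    """Derive the code the peer should present over the second link
    (e.g. Bluetooth Low Energy) from a secret shared over the first
    link (e.g. Classic Bluetooth), binding the two links together."""
    return hmac.new(code_over_link1, link2_address, hashlib.sha256).digest()

def verify_second_link(code_over_link1: bytes,
                       code_over_link2: bytes,
                       link2_address: bytes) -> bool:
    # Constant-time comparison, as for any authenticator check.
    expected = derive_expected(code_over_link1, link2_address)
    return hmac.compare_digest(expected, code_over_link2)

# Simulated exchange: the second device sends one code per link.
shared = os.urandom(16)           # security code sent over the first link
addr = b"aa:bb:cc:dd:ee:ff"       # second-link address, learned over link 1
presented = derive_expected(shared, addr)  # responsive code over link 2

ok = verify_second_link(shared, presented, addr)
bad = verify_second_link(shared, b"\x00" * 32, addr)
```

On a mismatch (`bad` here), the claims contemplate terminating the second communications link or raising an alert, which is the policy decision layered on top of this check.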
2,600
10,209
10,209
16,170,150
2,685
A circuit breaker apparatus may include a housing, a circuit inside the housing for protecting the conductors and the load of the circuit, a display attached to outside of the housing, a controller, and a power control device. The display may be an electronically-alterable display that does not require power in situations other than changing its state. The power control device may provide power to the controller and the display when the apparatus is being tampered with. When the controller is powered, it may cause the power control device to cause the display to change from a state that indicates that the apparatus is authenticated to another state that indicates that the apparatus has been tampered with. The power control device may include a battery and a switch, or a power harvester, which can be configured to provide power to the controller when the apparatus is being tampered with.
1. A circuit breaker apparatus comprising: a housing; a circuit disposed in the housing and configured to connect a power line to a load via one or more conductors and provide circuit protection for the one or more conductors and the load; a display that is operable to change from a first state to a second state, wherein: the first state indicates that the apparatus is authenticated and the second state indicates that the apparatus has been tampered with, and power is not required to maintain the first state or the second state once displayed; a controller; and a power control device electrically coupled to the controller; wherein: the display is configured to be in the first state when the apparatus is authenticated and has not been tampered with, the power control device is configured to provide power to the controller when the apparatus is tampered with, and the controller is configured to, when powered, send a signal to cause the display to change from the first state to the second state. 2. The apparatus of claim 1, wherein the power control device comprises an energy storage device that becomes sufficiently charged to power the controller when the apparatus is being tampered with. 3. The apparatus of claim 2, wherein the energy storage device is a capacitor. 4. The apparatus of claim 2, further comprising a battery and a switch, wherein the switch is configured to connect the battery to the power control device when the apparatus is being tampered with. 5. The apparatus of claim 2, further comprising a power harvester disposed inside the apparatus and configured to charge the energy storage device when the apparatus is being tampered with. 6. The apparatus of claim 5, wherein: the housing comprises a plurality of sections; and the power harvester is configured to charge the energy storage device when the housing is opened between any two of the plurality of sections of the housing. 7. 
The apparatus of claim 1, wherein the first state of the display comprises a state in which a two-dimensional barcode containing product information is displayed. 8. The apparatus of claim 7, wherein the second state of the display comprises a state in which: a message displayed in the first state is erased; a blank screen is displayed; or a message indicating that the apparatus has been tampered with is displayed. 9. The apparatus of claim 1 further comprising a first RFID tag attached to the apparatus, the first RFID tag containing authentication data therein, and wherein the controller is further configured to initialize the display to the first state by: retrieving the authentication data from the first RFID tag; retrieving a key stored external to the apparatus; using the key to determine whether the authentication data retrieved from the first RFID tag is valid; and upon determining that the authentication data retrieved from the first RFID tag is valid, causing the power control device to initialize the display to the first state. 10. The apparatus of claim 9, wherein the controller is configured to determine whether the apparatus has been tampered with before causing the power control device to cause the display to change from the first state to the second state, by: retrieving the authentication data from the first RFID tag; retrieving the key stored external to the apparatus; using the key to determine whether the authentication data retrieved from the first RFID tag is valid; and upon determining that the authentication data retrieved from the first RFID tag is invalid, determining that the apparatus has been tampered with. 11. 
The apparatus of claim 9, further comprising a second RFID tag containing authentication data therein and attached to the apparatus, and wherein the controller is further configured to initialize the display to the first state by additionally: retrieving the authentication data from the second RFID tag; using the key to determine whether the authentication data retrieved from the second RFID tag is valid; and upon determining that both the authentication data retrieved from the first RFID tag is valid and the authentication data retrieved from the second RFID tag is valid, causing the display to initialize to the first state. 12. The apparatus of claim 11, wherein the first RFID tag is attached to a cover of the housing and the second RFID tag is attached to a base of the housing. 13. The apparatus of claim 9, wherein: the first RFID tag is an active RFID chip; and the power control device is further configured to power the first RFID tag when the apparatus is being tampered with. 14. A method of indicating integrity of a circuit breaker, the method comprising: by a display of the circuit breaker, being in a first state that indicates the circuit breaker is authenticated and has not been tampered with and that does not require power to maintain the first state once displayed; by a power control device of the circuit breaker, detecting that the circuit breaker is being tampered with, and in response to detecting that the circuit breaker is being tampered with, providing power to a controller of the circuit breaker; by the controller of the circuit breaker, when powered, sending a signal to cause the display to change from the first state to a second state, wherein: the second state indicates that the circuit breaker has been tampered with, and power to the display is not required to maintain the second state once displayed. 15. 
The method of claim 14, further comprising: by the controller, initializing the display to the first state by: retrieving authentication data from a first RFID tag attached to the circuit breaker; retrieving a key stored external to the circuit breaker; using the key to determine whether the authentication data retrieved from the first RFID tag is valid; and upon determining that the authentication data retrieved from the first RFID tag is valid, causing the power control device to initialize the display to the first state. 16. The method of claim 15, further comprising: by the controller, determining whether the circuit breaker has been tampered with before causing the power control device to cause the display to change from the first state to the second state, by: retrieving the authentication data from the first RFID tag; retrieving the key stored external to the circuit breaker; using the key to determine whether the authentication data retrieved from the first RFID tag is valid; and upon determining that the authentication data retrieved from the first RFID tag is invalid, determining that the circuit breaker has been tampered with. 17. The method of claim 15, further comprising: by the controller, initializing the display to the first state by additionally: retrieving authentication data from a second RFID tag attached to the circuit breaker; using the key to determine whether the authentication data retrieved from the second RFID tag is valid; and upon determining that both the authentication data retrieved from the first RFID tag is valid and the authentication data retrieved from the second RFID tag is valid, causing the display to initialize to the first state. 18. The method of claim 16, further comprising, by the power control device, powering the first RFID tag when the circuit breaker is being tampered with. 19. 
The method of claim 14, wherein the first state of the display comprises a state in which a two-dimensional barcode containing product information is displayed. 20. The method of claim 14, wherein the change from the first state to the second state comprises: erasing a message displayed in the first state; displaying a blank screen; or displaying a message indicating that the circuit breaker has been tampered with.
A circuit breaker apparatus may include a housing, a circuit inside the housing for protecting the conductors and the load of the circuit, a display attached to outside of the housing, a controller, and a power control device. The display may be an electronically-alterable display that does not require power in situations other than changing its state. The power control device may provide power to the controller and the display when the apparatus is being tampered with. When the controller is powered, it may cause the power control device to cause the display to change from a state that indicates that the apparatus is authenticated to another state that indicates that the apparatus has been tampered with. The power control device may include a battery and a switch, or a power harvester, which can be configured to provide power to the controller when the apparatus is being tampered with.1. A circuit breaker apparatus comprising: a housing; a circuit disposed in the housing and configured to connect a power line to a load via one or more conductors and provide circuit protection for the one or more conductors and the load; a display that is operable to change from a first state to a second state, wherein: the first state indicates that the apparatus is authenticated and the second state indicates that the apparatus has been tampered with, and power is not required to maintain the first state or the second state once displayed; a controller; and a power control device electrically coupled to the controller; wherein: the display is configured to be in the first state when the apparatus is authenticated and has not been tampered with, the power control device is configured to provide power to the controller when the apparatus is tampered with, and the controller is configured to, when powered, send a signal to cause the display to change from the first state to the second state. 2. 
The apparatus of claim 1, wherein the power control device comprises an energy storage device that becomes sufficiently charged to power the controller when the apparatus is being tampered with. 3. The apparatus of claim 2, wherein the energy storage device is a capacitor. 4. The apparatus of claim 2, further comprising a battery and a switch, wherein the switch is configured to connect the battery to the power control device when the apparatus is being tampered with. 5. The apparatus of claim 2, further comprising a power harvester disposed inside the apparatus and configured to charge the energy storage device when the apparatus is being tampered with. 6. The apparatus of claim 5, wherein: the housing comprises a plurality of sections; and the power harvester is configured to charge the energy storage device when the housing is opened between any two of the plurality of sections of the housing. 7. The apparatus of claim 1, wherein the first state of the display comprises a state in which a two-dimensional barcode containing product information is displayed. 8. The apparatus of claim 7, wherein the second state of the display comprises a state in which: a message displayed in the first state is erased; a blank screen is displayed; or a message indicating that the apparatus has been tampered with is displayed. 9. The apparatus of claim 1 further comprising a first RFID tag attached to the apparatus, the first RFID tag containing authentication data therein, and wherein the controller is further configured to initialize the display to the first state by: retrieving the authentication data from the first RFID tag; retrieving a key stored external to the apparatus; using the key to determine whether the authentication data retrieved from the first RFID tag is valid; and upon determining that the authentication data retrieved from the first RFID tag is valid, causing the power control device to initialize the display to the first state. 10. 
The apparatus of claim 9, wherein the controller is configured to determine whether the apparatus has been tampered with before causing the power control device to cause the display to change from the first state to the second state, by: retrieving the authentication data from the first RFID tag; retrieving the key stored external to the apparatus; using the key to determine whether the authentication data retrieved from the first RFID tag is valid; and upon determining that the authentication data retrieved from the first RFID tag is invalid, determining that the apparatus has been tampered with. 11. The apparatus of claim 9, further comprising a second RFID tag containing authentication data therein and attached to the apparatus, and wherein the controller is further configured to initialize the display to the first state by additionally: retrieving the authentication data from the second RFID tag; using the key to determine whether the authentication data retrieved from the second RFID tag is valid; and upon determining that both the authentication data retrieved from the first RFID tag is valid and the authentication data retrieved from the second RFID tag is valid, causing the display to initialize to the first state. 12. The apparatus of claim 11, wherein the first RFID tag is attached to a cover of the housing and the second RFID tag is attached to a base of the housing. 13. The apparatus of claim 9, wherein: the first RFID tag is an active RFID chip; and the power control device is further configured to power the first RFID tag when the apparatus is being tampered with. 14. 
A method of indicating integrity of a circuit breaker, the method comprising: by a display of the circuit breaker, being in a first state that indicates the circuit breaker is authenticated and has not been tampered with and that does not require power to maintain the first state once displayed; by a power control device of the circuit breaker, detecting that the circuit breaker is being tampered with, and in response to detecting that the circuit breaker is being tampered with providing power to a controller of the circuit breaker; by the controller of the circuit breaker, when powered, sending a signal to cause the display to change from the first state to a second state, wherein: the second state indicates that the circuit breaker has been tampered with, and power to the display is not required to maintain the second state once displayed. 15. The method of claim 14, further comprising: by the controller, initializing the display to the first state by: retrieving authentication data from a first RFID tag attached to the circuit breaker; retrieving a key stored external to the circuit breaker; using the key to determine whether the authentication data retrieved from the first RFID tag is valid; and upon determining that the authentication data retrieved from the first RFID tag is valid, causing the power control device to initialize the display to the first state. 16. 
The method of claim 15, further comprising: by the controller, determining whether the circuit breaker has been tampered with before causing the power control device to cause the display to change from the first state to the second state, by: retrieving the authentication data from the first RFID tag; retrieving the key stored external to the circuit breaker; using the key to determine whether the authentication data retrieved from the first RFID tag is valid; and upon determining that the authentication data retrieved from the first RFID tag is invalid, determining that the circuit breaker has been tampered with. 17. The method of claim 15, further comprising: by the controller, initializing the display to the first state by additionally: retrieving authentication data from a second RFID tag attached to the circuit breaker; using the key to determine whether the authentication data retrieved from the second RFID tag is valid; and upon determining that both the authentication data retrieved from the first RFID tag is valid and the authentication data retrieved from the second RFID tag is valid, causing the display to initialize to the first state. 18. The method of claim 16, further comprising, by the power control device, powering the first RFID tag when the circuit breaker is being tampered with. 19. The method of claim 14, wherein the first state of the display comprises a state in which a two-dimensional barcode containing product information is displayed. 20. The method of claim 14, wherein the change from the first state to the second state comprises: erasing a message displayed in the first state; displaying a blank screen; or displaying a message indicating that the circuit breaker has been tampered with.
2,600
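The RFID validation flow recited in the circuit-breaker claims above (claims 9, 11, 15 and 17: read authentication data from a tag, retrieve a key stored externally, and use the key to decide validity) could be realized with a keyed MAC. A minimal Python sketch, assuming an HMAC-SHA256 scheme; the function names, payload layout, and 32-byte MAC length are illustrative assumptions, not taken from the patent:

```python
import hmac
import hashlib

def make_tag_payload(serial: bytes, key: bytes) -> bytes:
    # Authentication data written to an RFID tag at manufacture:
    # the breaker's serial number followed by an HMAC-SHA256 over it.
    return serial + hmac.new(key, serial, hashlib.sha256).digest()

def is_tag_valid(payload: bytes, key: bytes) -> bool:
    # "Use the key to determine whether the authentication data is
    # valid": recompute the MAC with the externally stored key and
    # compare it in constant time to the MAC read from the tag.
    serial, mac = payload[:-32], payload[-32:]
    expected = hmac.new(key, serial, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

def display_state(cover_tag: bytes, base_tag: bytes, key: bytes) -> str:
    # Claim 11's two-tag variant: both the cover tag and the base tag
    # must validate before the display is initialized to the
    # "authenticated" (first) state; otherwise treat as tampered.
    if is_tag_valid(cover_tag, key) and is_tag_valid(base_tag, key):
        return "authenticated"
    return "tampered"
```

Keeping the MAC key off-device (as the claims' "key stored external to the apparatus" suggests) means a cloned tag without the key cannot forge a payload that validates.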
10,210
10,210
15,001,595
2,621
A display device is disclosed. In one aspect, the display device includes a substrate, a plurality of pixels formed over the substrate and a plurality of signal lines formed over the substrate and connected to the pixels. The signal lines include a plurality of data lines formed over the substrate and a first crack sensing line connected to a first data line. The first crack sensing line is divided into first and second sections and the first section has a width that is greater than that of the second section.
1. A display device, comprising: a substrate including a display area and a peripheral area neighboring the display area; a plurality of pixels formed over the substrate in the display area; and a plurality of signal lines formed over the substrate and connected to the pixels, wherein the signal lines include: a plurality of data lines formed over the substrate, and a first crack sensing line formed in the peripheral area and connected to a first data line, wherein the first crack sensing line is divided into first and second sections, and wherein the first section has a width that is greater than that of the second section. 2. The display device of claim 1, wherein the first section has a polygonal shape. 3. The display device of claim 2, wherein the first section has a substantially quadrangular shape or a substantially rhombus shape. 4. The display device of claim 2, wherein the first and second sections of the first crack sensing line overlap each other with an insulating layer interposed therebetween, and wherein the first and second sections are connected to each other through a first hole defined in the insulating layer. 5. The display device of claim 2, wherein the first crack sensing line extends in a first direction, wherein the first crack sensing line includes: i) a first oblique section that forms a first angle with the first direction and ii) a second oblique section forming a second angle with the first direction, wherein the first and second oblique sections are connected to each other via a crossing section, wherein the first and second oblique portions forms the first section, and wherein the crossing section forms the second section. 6. The display device of claim 5, further comprising a plurality of second holes formed in the insulating layer. 7. 
The display device of claim 1, wherein: the signal lines include a first test signal line and a second test signal line formed over the substrate in the peripheral area, the first crack sensing line is connected to the first data line through a first connection portion and a second connection portion, the first crack sensing line forms a loop between the first and second connection portions, and the data lines are connected to: i) the first test signal line through a plurality of first switching elements and ii) the second test signal line through a plurality of second switching elements. 8. The display device of claim 7, wherein the first crack sensing line is connected between the second test signal line and one of the second switching elements. 9. The display device of claim 8, further comprising: i) a first test gate line formed in the peripheral area of the substrate and connected to the first switching elements, and ii) a second test gate line connected to the second switching elements. 10. The display device of claim 9, wherein: the data lines are configured to receive a first test signal from the first test signal line in response to the first test gate line being applied with a first gate-on voltage, and the data lines are further configured to receive a second test signal from the second test signal line in response to the second test gate line being applied with a second gate-on voltage. 11. The display device of claim 10, wherein the second test gate line is further configured to receive the second gate-on voltage after the first test gate line receives the first gate-on voltage, and wherein the magnitude of the first test voltage and the magnitude of the second test voltage are different from each other. 12. 
A display device, comprising: a substrate including a display area and a peripheral area neighboring the display area; a plurality of pixels formed over the substrate in the display area; and a plurality of signal lines formed over the substrate and connected to the pixels, wherein the signal lines include: a plurality of data lines formed over the substrate, and a first crack sensing line formed in the peripheral area and connected to a first data line, wherein the first crack sensing line is divided into first and second sections, and wherein the first section extends in a first direction and the second section extends in a second direction different from the first direction. 13. The display device of claim 12, wherein the data lines extend in a first direction, wherein the first and second sections are connected to each other in a crossing region, wherein the first section forms a first angle with the first direction, wherein the second section forms a second angle with the first direction, and wherein the first and second angles are less than or greater than about 90 degrees. 14. The display device of claim 12, wherein the data lines extend in a first direction, wherein the first and second sections are connected to each other in a crossing region, wherein the first section extends in the first direction, and wherein the second section extends in a direction substantially perpendicular to the first direction. 15. The display device of claim 14, further comprising a plurality of holes formed in an insulating layer that overlaps the first crack sensing line. 16. 
The display device of claim 12, wherein: the signal lines further include a first test signal line and a second test signal line formed over the substrate in the peripheral area, the first crack sensing line is connected to the first data line through a first connection portion and a second connection portion, the first crack sensing line forms a loop between the first and second connection portions, and the data lines are connected to: i) the first test signal line through a plurality of first switching elements and ii) the second test signal line through a plurality of second switching elements. 17. The display device of claim 16, wherein the first crack sensing line is connected between the second test signal line and the second switching elements. 18. The display device of claim 17, further comprising a first test gate line connected to the first switching elements and a second test gate line connected to the second switching elements, and wherein the first test gate line is formed in the peripheral area. 19. The display device of claim 18, wherein: the data lines are configured to receive a first test signal from the first test signal line in response to the first test gate line being applied with a first gate-on voltage, and the data lines are further configured to receive a second test signal from the second test signal line in response to the second test gate line being applied with a second gate-on voltage. 20. The display device of claim 19, wherein the second test gate line is further configured to receive the second gate-on voltage after the first test gate line receives the first gate-on voltage, and wherein the magnitude of the first test voltage and the magnitude of the second test voltage are different from each other.
A display device is disclosed. In one aspect, the display device includes a substrate, a plurality of pixels formed over the substrate and a plurality of signal lines formed over the substrate and connected to the pixels. The signal lines include a plurality of data lines formed over the substrate and a first crack sensing line connected to a first data line. The first crack sensing line is divided into first and second sections and the first section has a width that is greater than that of the second section.1. A display device, comprising: a substrate including a display area and a peripheral area neighboring the display area; a plurality of pixels formed over the substrate in the display area; and a plurality of signal lines formed over the substrate and connected to the pixels, wherein the signal lines include: a plurality of data lines formed over the substrate, and a first crack sensing line formed in the peripheral area and connected to a first data line, wherein the first crack sensing line is divided into first and second sections, and wherein the first section has a width that is greater than that of the second section. 2. The display device of claim 1, wherein the first section has a polygonal shape. 3. The display device of claim 2, wherein the first section has a substantially quadrangular shape or a substantially rhombus shape. 4. The display device of claim 2, wherein the first and second sections of the first crack sensing line overlap each other with an insulating layer interposed therebetween, and wherein the first and second sections are connected to each other through a first hole defined in the insulating layer. 5. 
The display device of claim 2, wherein the first crack sensing line extends in a first direction, wherein the first crack sensing line includes: i) a first oblique section that forms a first angle with the first direction and ii) a second oblique section forming a second angle with the first direction, wherein the first and second oblique sections are connected to each other via a crossing section, wherein the first and second oblique sections form the first section, and wherein the crossing section forms the second section. 6. The display device of claim 5, further comprising a plurality of second holes formed in the insulating layer. 7. The display device of claim 1, wherein: the signal lines include a first test signal line and a second test signal line formed over the substrate in the peripheral area, the first crack sensing line is connected to the first data line through a first connection portion and a second connection portion, the first crack sensing line forms a loop between the first and second connection portions, and the data lines are connected to: i) the first test signal line through a plurality of first switching elements and ii) the second test signal line through a plurality of second switching elements. 8. The display device of claim 7, wherein the first crack sensing line is connected between the second test signal line and one of the second switching elements. 9. The display device of claim 8, further comprising: i) a first test gate line formed in the peripheral area of the substrate and connected to the first switching elements, and ii) a second test gate line connected to the second switching elements. 10. 
The display device of claim 9, wherein: the data lines are configured to receive a first test signal from the first test signal line in response to the first test gate line being applied with a first gate-on voltage, and the data lines are further configured to receive a second test signal from the second test signal line in response to the second test gate line being applied with a second gate-on voltage. 11. The display device of claim 10, wherein the second test gate line is further configured to receive the second gate-on voltage after the first test gate line receives the first gate-on voltage, and wherein the magnitude of the first test voltage and the magnitude of the second test voltage are different from each other. 12. A display device, comprising: a substrate including a display area and a peripheral area neighboring the display area; a plurality of pixels formed over the substrate in the display area; and a plurality of signal lines formed over the substrate and connected to the pixels, wherein the signal lines include: a plurality of data lines formed over the substrate, and a first crack sensing line formed in the peripheral area and connected to a first data line, wherein the first crack sensing line is divided into first and second sections, and wherein the first section extends in a first direction and the second section extends in a second direction different from the first direction. 13. The display device of claim 12, wherein the data lines extend in a first direction, wherein the first and second sections are connected to each other in a crossing region, wherein the first section forms a first angle with the first direction, wherein the second section forms a second angle with the first direction, and wherein the first and second angles are less than or greater than about 90 degrees. 14. 
The display device of claim 12, wherein the data lines extend in a first direction, wherein the first and second sections are connected to each other in a crossing region, wherein the first section extends in the first direction, and wherein the second section extends in a direction substantially perpendicular to the first direction. 15. The display device of claim 14, further comprising a plurality of holes formed in an insulating layer that overlaps the first crack sensing line. 16. The display device of claim 12, wherein: the signal lines further include a first test signal line and a second test signal line formed over the substrate in the peripheral area, the first crack sensing line is connected to the first data line through a first connection portion and a second connection portion, the first crack sensing line forms a loop between the first and second connection portions, and the data lines are connected to: i) the first test signal line through a plurality of first switching elements and ii) the second test signal line through a plurality of second switching elements. 17. The display device of claim 16, wherein the first crack sensing line is connected between the second test signal line and the second switching elements. 18. The display device of claim 17, further comprising a first test gate line connected to the first switching elements and a second test gate line connected to the second switching elements, and wherein the first test gate line is formed in the peripheral area. 19. The display device of claim 18, wherein: the data lines are configured to receive a first test signal from the first test signal line in response to the first test gate line being applied with a first gate-on voltage, and the data lines are further configured to receive a second test signal from the second test signal line in response to the second test gate line being applied with a second gate-on voltage. 20. 
The display device of claim 19, wherein the second test gate line is further configured to receive the second gate-on voltage after the first test gate line receives the first gate-on voltage, and wherein the magnitude of the first test voltage and the magnitude of the second test voltage are different from each other.
2,600
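The two-pass test sequence in the display-device claims above (claims 9-11 and 18-20) amounts to: charge every data line with a first test signal, then drive a second test signal of different magnitude toward the first data line through the crack sensing loop; a cracked (open) loop leaves that line at the first level. A toy Python model of that decision, with all names and voltage values chosen purely for illustration:

```python
def run_crack_test(loop_intact: bool,
                   first_test_v: float = 3.0,
                   second_test_v: float = 6.0) -> bool:
    """Return True when the sensing loop appears broken (crack found).

    Pass 1: the first test gate line turns on its switching elements
    and every data line charges to first_test_v from the first test
    signal line. Pass 2: the second test gate line turns on and the
    second test signal is driven toward the first data line through
    the crack sensing loop; an open (cracked) loop blocks it, leaving
    the line at the pass-1 level.
    """
    data_line_v = first_test_v          # pass 1: reference charge
    if loop_intact:
        data_line_v = second_test_v     # pass 2 signal gets through
    # The two magnitudes differ (claims 11 and 20), so the final
    # level on the first data line reveals whether the loop conducted.
    return data_line_v != second_test_v
```

Making the two test magnitudes different is what lets a single voltage readout distinguish "loop conducted" from "loop open", which is why the claims require them to differ.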
10,211
10,211
13,395,059
2,687
An apparatus and method for tracking an object includes a transmitter that generates a signal and a receiver that receives the signal generated by the transmitter. The receiver generates an alert signal when a distance between the transmitter and receiver is greater than a predetermined value. The receiver is retained in a retaining element, such as a wrist band, while the transmitter is secured to an object. The receiver is programmable to generate the alert signal only when the predetermined distance has been exceeded. The retaining element optionally includes multiple retaining features so that multiple receivers may be retained in a single retaining element. Each receiver is programmable to communicate only with one corresponding transmitter by assigning each transmitter/receiver pair with a unique identification code. The unique identification code expires after use so that each transmitter may only communicate with a single receiver.
1. A method for tracking an object, comprising: automatically generating a unique identification code; assigning the unique identification code to a transmitter and to a receiver; enabling receipt by the receiver of a signal generated by the transmitter; and generating an alert signal when a distance between the transmitter and the receiver is greater than a predetermined value. 2. The method of claim 1, wherein the signal generated by the transmitter has a variable signal strength associated with the distance between the transmitter and the receiver, wherein the predetermined value is associated with a corresponding signal strength, and wherein the alert signal is generated when the variable signal strength falls below the corresponding signal strength for the predetermined value. 3. The method of claim 1, further comprising manually generating a user input code and assigning the user input code to the transmitter and the receiver. 4. The method of claim 1, wherein assigning the unique identification code to the transmitter and to the receiver comprises coupling the transmitter and the receiver with a processing device. 5. The method of claim 1, further comprising removing the unique identification code from a list of available identification codes after assigning the unique identification code to the transmitter and the receiver. 6. The method of claim 1, further comprising restricting the receiver to communicate only with the transmitter having the unique identifier. 7. The method of claim 1, further comprising setting the predetermined value to a number in the range of 1 to 5 feet. 8. The method of claim 1, further comprising setting the predetermined value to a number in the range of 100 to 200 feet. 9. 
The method of claim 1, further comprising: automatically generating an additional unique identification code; assigning the additional unique identification code to an additional transmitter and to an additional receiver; enabling receipt by the additional receiver of a signal generated by the additional transmitter; and generating an alert signal when a distance between the additional transmitter and the additional receiver is greater than an additional predetermined value. 10. The method of claim 9, further comprising retaining the receiver and the additional receiver in a retaining element, the retaining element having multiple receiver retaining slots. 11. The method of claim 10, further comprising altering at least one of the predetermined value and the additional predetermined value by actuating a button corresponding to the receiver or the additional receiver. 12. The method of claim 1, wherein the signal generated by the transmitter includes global positioning information. 13. A system for tracking an object, comprising: a receiver capable of receiving programming instructions from a processing device; a transmitter capable of receiving programming instructions from the processing device and capable of being secured to the object; a selectable coupling configured to facilitate communication between the processing device and the receiver and transmitter; and a retaining device having at least one slot for retaining the receiver; wherein the receiver and the transmitter are configured to receive a unique identifier code; wherein the transmitter is configured to generate a signal; wherein the receiver is configured to receive the signal generated by the transmitter; and wherein the receiver is configured to generate an alert signal when a distance between the transmitter and the receiver is greater than a predetermined value. 14. 
The system of claim 13, wherein the signal generated by the transmitter has a variable signal strength associated with the distance between the transmitter and the receiver, wherein the predetermined value is associated with a corresponding signal strength, and wherein the alert signal is generated when the variable signal strength falls below the corresponding signal strength for the predetermined value. 15. The system of claim 13, wherein the receiver and the transmitter are capable of receiving a user input code. 16. The system of claim 13, wherein the unique identification code is transmitted to the transmitter and to the receiver via the selectable coupling. 17. The system of claim 13, wherein the processing device is configured to remove the unique identification code from a list of available identification codes after assigning the unique identification code to the transmitter and the receiver. 18. The system of claim 13, wherein the receiver is configured to communicate only with the transmitter having the unique identifier. 19. The system of claim 13, wherein the predetermined value is set to a number in the range of 1 to 5 feet. 20. The system of claim 13, wherein the predetermined value is set to a number in the range of 100 to 200 feet. 21. 
The system of claim 13, further comprising: an additional receiver capable of receiving programming instructions from the processing device; and an additional transmitter capable of receiving programming instructions from the processing device and capable of being secured to the object, wherein the selectable coupling is configured to facilitate communication between the processing device and the additional receiver and the additional transmitter, wherein the retaining device comprises at least one additional slot for retaining the additional receiver, wherein the additional receiver and the additional transmitter are configured to receive an additional unique identifier code, wherein the additional transmitter is configured to generate a signal, wherein the additional receiver is configured to receive the signal generated by the additional transmitter, and wherein the additional receiver is configured to generate an alert signal when a distance between the additional transmitter and the additional receiver is greater than an additional predetermined value. 22. The system of claim 20, wherein the receiver and the additional receiver are disposed within the at least one slot and the at least one additional slot of the retaining element. 23. The system of claim 13, wherein the retaining element comprises at least one button corresponding to the receiver, wherein the button is configured to alter the predetermined value upon actuation of the at least one button. 24. The system of claim 13, wherein a surface of the transmitter comprises an adhesive to enable securing of the transmitter to the object. 25. The system of claim 13, wherein the retaining element is a wrist band. 26. The system of claim 22, wherein the button is disposed within a recessed channel of the retaining element. 27. The system of claim 25, further comprising an actuator element for actuating the button by inserting the actuator element into the recessed channel. 28. 
The system of claim 13, wherein the signal generated by the transmitter includes global positioning information.
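Claims 5 and 17 above describe expiring a unique identification code by removing it from a list of available codes once it is assigned, so each transmitter can only ever pair with one receiver. A small sketch of that bookkeeping, assuming a hypothetical registry class and random hex codes (neither detail is specified by the patent):

```python
import secrets

class CodeRegistry:
    """One-shot pairing codes: once a code is assigned to a
    transmitter/receiver pair it is removed from the available list,
    so that pair is the only one sharing the code."""

    def __init__(self, pool_size: int = 1000) -> None:
        # Pool of unique identification codes still available.
        self.available = set()
        while len(self.available) < pool_size:
            self.available.add(secrets.token_hex(4))

    def assign_pair(self) -> str:
        # Automatically pick a code to program into both the
        # transmitter and the receiver (programming not modeled
        # here), then expire it by dropping it from the pool.
        return self.available.pop()
```

Popping from the set both returns the code and removes it, which mirrors the claimed "remove ... from a list of available identification codes after assigning" in one step.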
An apparatus and method for tracking an object includes a transmitter that generates a signal and a receiver that receives the signal generated by the transmitter. The receiver generates an alert signal when a distance between the transmitter and receiver is greater than a predetermined value. The receiver is retained in a retaining element, such as a wrist band, while the transmitter is secured to an object. The receiver is programmable to generate the alert signal only when the predetermined distance has been exceeded. The retaining element optionally includes multiple retaining features so that multiple receivers may be retained in a single retaining element. Each receiver is programmable to communicate only with one corresponding transmitter by assigning each transmitter/receiver pair with a unique identification code. The unique identification code expires after use so that each transmitter may only communicate with a single receiver.1. A method for tracking an object, comprising: automatically generating a unique identification code; assigning the unique identification code to a transmitter and to a receiver; enabling receipt by the receiver of a signal generated by the transmitter; and generating an alert signal when a distance between the transmitter and the receiver is greater than a predetermined value. 2. The method of claim 1, wherein the signal generated by the transmitter has a variable signal strength associated with the distance between the transmitter and the receiver, wherein the predetermined value is associated with a corresponding signal strength, and wherein the alert signal is generated when the variable signal strength falls below the corresponding signal strength for the predetermined value. 3. A method of claim 1, further comprising manually generating a user input code and assigning the user input code to the transmitter and the receiver. 4. 
The method of claim 1, wherein assigning the unique identification code to the transmitter and to the receiver comprises coupling the transmitter and the receiver with a processing device. 5. The method of claim 1, further comprising removing the unique identification code from a list of available identification codes after assigning the unique identification code to the transmitter and the receiver. 6. The method of claim 1, further comprising restricting the receiver to communicate only with the transmitter having the unique identification code. 7. The method of claim 1, further comprising setting the predetermined value to a number in the range of 1 to 5 feet. 8. The method of claim 1, further comprising setting the predetermined value to a number in the range of 100 to 200 feet. 9. The method of claim 1, further comprising: automatically generating an additional unique identification code; assigning the additional unique identification code to an additional transmitter and to an additional receiver; enabling receipt by the additional receiver of a signal generated by the additional transmitter; and generating an alert signal when a distance between the additional transmitter and the additional receiver is greater than an additional predetermined value. 10. The method of claim 9, further comprising retaining the receiver and the additional receiver in a retaining element, the retaining element having multiple receiver retaining slots. 11. The method of claim 10, further comprising altering at least one of the predetermined value and the additional predetermined value by actuating a button corresponding to the receiver or the additional receiver. 12. The method of claim 1, wherein the signal generated by the transmitter includes global positioning information. 13. 
A system for tracking an object, comprising: a receiver capable of receiving programming instructions from a processing device; a transmitter capable of receiving programming instructions from the processing device and capable of being secured to the object; a selectable coupling configured to facilitate communication between the processing device and the receiver and transmitter; and a retaining device having at least one slot for retaining the receiver; wherein the receiver and the transmitter are configured to receive a unique identification code; wherein the transmitter is configured to generate a signal; wherein the receiver is configured to receive the signal generated by the transmitter; and wherein the receiver is configured to generate an alert signal when a distance between the transmitter and the receiver is greater than a predetermined value. 14. The system of claim 13, wherein the signal generated by the transmitter has a variable signal strength associated with the distance between the transmitter and the receiver, wherein the predetermined value is associated with a corresponding signal strength, and wherein the alert signal is generated when the variable signal strength falls below the corresponding signal strength for the predetermined value. 15. The system of claim 13, wherein the receiver and the transmitter are capable of receiving a user input code. 16. The system of claim 13, wherein the unique identification code is transmitted to the transmitter and to the receiver via the selectable coupling. 17. The system of claim 13, wherein the processing device is configured to remove the unique identification code from a list of available identification codes after assigning the unique identification code to the transmitter and the receiver. 18. The system of claim 13, wherein the receiver is configured to communicate only with the transmitter having the unique identification code. 19. 
The system of claim 13, wherein the predetermined value is set to a number in the range of 1 to 5 feet. 20. The system of claim 13, wherein the predetermined value is set to a number in the range of 100 to 200 feet. 21. The system of claim 13, further comprising: an additional receiver capable of receiving programming instructions from the processing device; and an additional transmitter capable of receiving programming instructions from the processing device and capable of being secured to the object, wherein the selectable coupling is configured to facilitate communication between the processing device and the additional receiver and the additional transmitter, wherein the retaining device comprises at least one additional slot for retaining the additional receiver, wherein the additional receiver and the additional transmitter are configured to receive an additional unique identification code, wherein the additional transmitter is configured to generate a signal, wherein the additional receiver is configured to receive the signal generated by the additional transmitter, and wherein the additional receiver is configured to generate an alert signal when a distance between the additional transmitter and the additional receiver is greater than an additional predetermined value. 22. The system of claim 21, wherein the receiver and the additional receiver are disposed within the at least one slot and the at least one additional slot of the retaining element. 23. The system of claim 13, wherein the retaining element comprises at least one button corresponding to the receiver, wherein the button is configured to alter the predetermined value upon actuation of the at least one button. 24. The system of claim 13, wherein a surface of the transmitter comprises an adhesive to enable securing of the transmitter to the object. 25. The system of claim 13, wherein the retaining element is a wrist band. 26. 
The system of claim 23, wherein the button is disposed within a recessed channel of the retaining element. 27. The system of claim 26, further comprising an actuator element for actuating the button by inserting the actuator element into the recessed channel. 28. The system of claim 13, wherein the signal generated by the transmitter includes global positioning information.
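The pairing and alert logic recited in the claims above can be sketched in a few lines: a pool of unique identification codes from which each assigned code is retired (claims 1 and 5), and an alert raised when received signal strength falls below the level corresponding to the predetermined distance (claims 2 and 14). This is an illustrative Python sketch, not the patented implementation; the pool size and RSSI threshold values are assumptions.

```python
import secrets

class PairingRegistry:
    """Sketch of claims 1 and 5: generate unique identification codes
    and retire each code from the list of available codes once assigned,
    so a code cannot pair a second transmitter/receiver pair."""

    def __init__(self, pool_size=100):
        # Pre-generate a pool of unique codes (pool size is an assumption).
        self.available = {secrets.token_hex(4) for _ in range(pool_size)}

    def assign_pair(self):
        # Assign one code to a transmitter/receiver pair; the code
        # "expires after use" by being removed from the available list.
        return self.available.pop()

def should_alert(rssi_dbm, threshold_dbm):
    """Sketch of claim 2: alert when the variable signal strength falls
    below the strength corresponding to the predetermined distance."""
    return rssi_dbm < threshold_dbm
```

A receiver holding an assigned code would ignore transmitters broadcasting any other code, and would raise the alert once measured RSSI drops below the configured threshold.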
2,600
10,212
10,212
12,693,881
2,652
A device for obtaining, storing and displaying information from a remote server, the device has a modem for establishing communication sessions with the remote server. A memory coupled to the modem stores the obtained information, and a display is coupled to the memory for displaying the stored information. The device automatically and periodically communicates with the remote server for obtaining the information.
1. A device for obtaining, storing and displaying digital video data carried over a wireless network from a first remote information server that is identified by a Uniform Resource Locator (URL) in the Internet, said device comprising: an antenna for transmitting and receiving digital data over the air; a wireless transceiver coupled to said antenna for bi-directional packet-based digital data communication over the air via said antenna; a non-volatile memory coupled to said wireless transceiver for storing digital video data received by said wireless transceiver from the wireless network; a first memory for storing a web-site URL; a video display component coupled to said non-volatile memory for displaying an image based on the digital video data stored in said non-volatile memory; and a single enclosure housing said antenna, said wireless transceiver, said non-volatile memory, said first memory and said video display component, wherein: said device is addressable in the Internet; and said device is operative for automatically and periodically communicating with the first remote information server at all times when said device is in operation for receiving digital video data from the first remote information server, and for storing and displaying the received digital video data. 2. The device according to claim 1, wherein said single enclosure has dimensions and an appearance of a conventional flat, wall-mountable framed picture. 3. The device according to claim 1, wherein said wireless network is a cellular network, said antenna is a wireless cellular antenna, and said wireless transceiver is a cellular wireless transceiver. 4. 
The device according to claim 1, wherein said device is addressable using a digital address, and is further operative to send the digital address and a request for digital video data to said first remote information server, and to obtain and display digital video data received from the first remote information server in response to the sent request for information. 5. The device according to claim 1, wherein said device is configured for wall mounting in a residential building, and the first remote information server is located outside the residential building. 6. The device according to claim 1, wherein: said wireless network is a Wireless Local Area Network (WLAN); said antenna is a WLAN antenna; and said wireless transceiver is a WLAN transceiver. 7. The device according to claim 6, wherein said WLAN transceiver is operative to communicate substantially according to IEEE802.11 standard. 8. The device according to claim 1, further comprising firmware and a processor for executing said firmware, said processor being coupled to control at least said antenna and said display component. 9. The device according to claim 8, wherein said processor is one of: a microprocessor; and a microcomputer, and said device further comprises at least one user operated button or switch coupled to said processor, for user control of operation of said device. 10. The device according to claim 1, wherein communication with the first remote information server is based on Internet Protocol (IP) suite. 11. The device according to claim 10, wherein communication with the first remote information server is based on TCP/IP. 12. The device according to claim 1, wherein the wireless communication is based on spread spectrum modulation. 13. The device according to claim 12, wherein the spread spectrum modulation is a DSSS (Direct Sequence Spread Spectrum) modulation. 14. The device according to claim 1, wherein the wireless communication uses a license-free radio frequency band. 15. 
The device according to claim 14, wherein the license-free radio frequency band is one of: 900 MHz; 2.4 GHz; and 5.8 GHz. 16. The device according to claim 1, wherein said device is dedicated only for obtaining, storing and displaying information from the first remote information server. 17. The device according to claim 1, wherein the device address is either a MAC address or an IP address. 18. The device according to claim 1, wherein said device is further operative to store and play digital audio data. 19. The device according to claim 1, wherein said single enclosure is constructed to have at least one of the following: a form substantially similar to that of a standard picture frame; wall mounting elements substantially similar to those of a standard picture frame for hanging on a wall; and a shape to at least in part substitute for a standard picture frame. 20. The device according to claim 1, further comprising a digital to analog converter coupled to said non-volatile memory for converting digital data stored in said non-volatile memory to an analog signal. 21. The device according to claim 20, wherein the analog signal is an analog video signal for connecting to an analog video display. 22. The device according to claim 21, wherein the analog video signal is an S-Video signal, or a composite video signal in a PAL or NTSC format. 23. 
A television set for receiving and displaying an analog video channel carried over a coaxial cable and digital video data from a wireless network, said television set comprising: a first connector for connecting to the coaxial cable; a flat-screen video display component for visually presenting information, said video display component being coupled to said first connector for receiving and displaying an analog video channel received from said coaxial cable; an antenna for transmitting and receiving digital data over the air; a wireless transceiver coupled to said antenna for bi-directional packet-based digital data communication over the air via said antenna; firmware and a processor for executing said firmware, said processor being coupled to control at least said wireless transceiver and said flat-screen video display component; and a single wall-mountable enclosure housing said first connector, said flat-screen video display component, said transceiver and said processor, said single enclosure having dimensions and an appearance of a conventional flat, wall-mountable framed picture, wherein said processor is coupled between said wireless transceiver and said flat-screen video display component for displaying the digital video data received from said wireless transceiver. 24. The television set according to claim 23, wherein said flat-screen video display component is based on Liquid Crystal Display (LCD) technology. 25. The television set according to claim 23, wherein said television set further comprises a non-volatile memory and is addressable in a digital data network, and said television set is operative for communicating via the wireless network with a first remote information server via the Internet for receiving information from the first remote information server, and for storing the received information in said non-volatile memory for displaying the received information. 26. 
The television set according to claim 25, further operative for automatically and periodically communicating with the first remote information server at all times when said television set is in operation. 27. The television set according to claim 25, wherein said non-volatile memory comprises a Flash memory. 28. The television set according to claim 25, wherein said firmware includes at least part of a web client for communication with, and accessing information stored in, the first remote information server. 29. The television set according to claim 28, wherein said at least part of a web client includes at least part of a graphical web browser. 30. The television set according to claim 29, wherein said at least part of a graphical web browser is based on Windows Internet Explorer. 31. The television set according to claim 25, wherein the first remote information server is organized as a web site having a Uniform Resource Locator (URL) and including web pages as part of the World Wide Web (WWW), and is further identified by said television set using the web site URL. 32. The television set according to claim 25, wherein communication with the first remote information server is based on Internet protocol suite. 33. The television set according to claim 32, wherein communication with the first remote information server is based on TCP/IP. 34. The television set according to claim 25, further operative to initiate a communication with the first remote information server after a set period following a prior communication session. 35. The television set according to claim 25, further operative to communicate with a second remote information server, different from the first remote information server, via the Internet, if communication with the first remote information server cannot be properly executed within a selected time period or after a set delay. 36. 
The television set according to claim 25, wherein said television set has a digital address and is further operative to send the digital address and a request for information to the first remote information server, and to obtain and display information received from the first remote information server in response to the sent request for information. 37. The television set according to claim 25, further comprising a second non-volatile memory for storing a digital address uniquely identifying said television set in a WAN, in a Local Area Network (LAN), or on the Internet. 38. The television set according to claim 37, wherein the digital address is either a MAC address or an IP address. 39. The television set according to claim 23, wherein: the wireless network is a Wireless Local Area Network (WLAN); said antenna is a WLAN antenna; and said wireless transceiver is a WLAN transceiver. 40. The television set according to claim 39, wherein said WLAN transceiver is operative to communicate substantially according to IEEE802.11 standard. 41. The television set according to claim 23, wherein communication over the wireless network is based on spread spectrum modulation. 42. The television set according to claim 41, wherein the spread spectrum modulation is a DSSS (Direct Sequence Spread Spectrum) modulation. 43. The television set according to claim 23, wherein communication over the wireless network uses a license-free radio frequency band. 44. The television set according to claim 43, wherein the license-free radio frequency band is one of: 900 MHz; 2.4 GHz; and 5.8 GHz. 45. The television set according to claim 23, wherein: the wireless network is a cellular network; said antenna is a wireless cellular antenna; and said wireless transceiver is a cellular wireless transceiver.
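The polling behavior in this record (automatic, periodic communication with a first remote information server, with retry after a set period and a fallback to a second server when the first cannot be reached) can be sketched roughly as follows. This is a minimal Python sketch of that control flow, not the claimed device; `fetch` is a hypothetical callable and the URLs are placeholders.

```python
def poll_once(fetch, primary_url, fallback_url):
    """One polling cycle: try the first remote information server; if it
    cannot be reached, fall back to a second server. `fetch(url)` is a
    hypothetical callable returning the fetched bytes or raising OSError."""
    try:
        return fetch(primary_url)
    except OSError:
        return fetch(fallback_url)

def poll_loop(fetch, primary_url, fallback_url, cycles):
    """Bounded stand-in for the periodic loop: store and return each
    received frame; a real device would also sleep for the set period
    between sessions and write the frame to non-volatile memory."""
    frames = []
    for _ in range(cycles):
        frames.append(poll_once(fetch, primary_url, fallback_url))
    return frames
```

In use, the device would display each returned frame and schedule the next session a fixed interval after the previous one completes.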
2,600
10,213
10,213
15,668,125
2,683
The embodiments of the present invention enable novel methods, non-transitory mediums, and systems for encoding and generating haptic effects. According to the various embodiments, a media object is retrieved. The media object is analyzed to determine one or more time periods for rendering haptic effects. The haptic effects for rendering during the time periods are determined. The haptic effects are encoded as a haptic effect pattern that identifies a start time and duration for each of the haptic effects.
1. A method for generating haptic effects, the method comprising: retrieving a media object; generating a haptic stream for storing the haptic effects, the haptic stream corresponding to the media object; analyzing the media object to determine one or more time periods for rendering the haptic effects; determining the haptic effects for rendering during the time periods; encoding the haptic effects of the haptic stream as a haptic effect pattern that identifies a start time and a duration of each of the haptic effects; engaging a processor to render the haptic effects of the haptic stream according to the haptic effect pattern; and disengaging the processor from processing the haptic stream according to the haptic effect pattern. 2. The method according to claim 1, wherein the haptic effect pattern includes one or more parameters for each of the haptic effects, the parameters including magnitude, frequency, and/or type of haptic effect. 3. The method according to claim 1, wherein the media object includes an audio object or a video object. 4. The method according to claim 1, wherein the haptic effect pattern only includes data relating to haptically-active time periods. 5. The method according to claim 1, wherein the haptic effect pattern includes a plurality of duration times that indicate alternating actuator OFF and actuator ON time periods. 6. The method according to claim 1, wherein the media object and the haptic effect pattern are synchronized. 7. The method according to claim 1, further comprising: adjusting an elapsed time variable of the haptic effect pattern in connection with execution of one of the following functions: pause, resume, and seek. 8. 
A device comprising: a processor; and a memory storing one or more programs for execution by the processor, the one or more programs including instructions for: retrieving a media object; generating a haptic stream for storing haptic effects, the haptic stream corresponding to the media object; analyzing the media object to determine one or more time periods for rendering the haptic effects; determining the haptic effects for rendering during the time periods; encoding the haptic effects of the haptic stream as a haptic effect pattern that identifies a start time and a duration of each of the haptic effects; engaging the processor to render the haptic effects of the haptic stream according to the haptic effect pattern; and disengaging the processor from processing the haptic stream according to the haptic effect pattern. 9. The device according to claim 8, wherein the haptic effect pattern includes one or more parameters for each of the haptic effects, the parameters including magnitude, frequency, and/or type of haptic effect. 10. The device according to claim 8, wherein the media object includes an audio object or a video object. 11. The device according to claim 8, wherein the haptic effect pattern only includes data relating to haptically-active time periods. 12. The device according to claim 8, wherein the haptic effect pattern includes a plurality of duration times that indicate alternating actuator OFF and actuator ON time periods. 13. The device according to claim 8, wherein the media object and the haptic effect pattern are synchronized. 14. The device according to claim 8, further comprising instructions for: adjusting an elapsed time variable of the haptic effect pattern in connection with execution of one of the following functions: pause, resume, and seek. 15. 
A non-transitory computer readable storage medium storing one or more programs configured to be executed by a processor, the one or more programs comprising instructions for: retrieving a media object; generating a haptic stream for storing haptic effects, the haptic stream corresponding to the media object; analyzing the media object to determine one or more time periods for rendering the haptic effects; determining the haptic effects for rendering during the time periods; encoding the haptic effects of the haptic stream as a haptic effect pattern that identifies a start time and a duration of each of the haptic effects; engaging a processor to render the haptic effects of the haptic stream according to the haptic effect pattern; and disengaging the processor from processing the haptic stream according to the haptic effect pattern. 16. The non-transitory computer readable storage medium according to claim 15, wherein the haptic effect pattern includes one or more parameters for each of the haptic effects, the parameters including magnitude, frequency, and/or type of haptic effect. 17. The non-transitory computer readable storage medium according to claim 15, wherein the media object includes an audio object or a video object. 18. The non-transitory computer readable storage medium according to claim 15, wherein the haptic effect pattern only includes data relating to haptically-active time periods. 19. The non-transitory computer readable storage medium according to claim 15, wherein the haptic effect pattern includes a plurality of duration times that indicate alternating actuator OFF and actuator ON time periods. 20. The non-transitory computer readable storage medium according to claim 15, wherein the media object and the haptic effect pattern are synchronized.
The embodiments of the present invention enable novel methods, non-transitory mediums, and systems for encoding and generating haptic effects. According to the various embodiments, a media object is retrieved. The media object is analyzed to determine one or more time periods for rendering haptic effects. The haptic effects for rendering during the time periods are determined. The haptic effects are encoded as a haptic effect pattern that identifies a start time and duration for each of the haptic effects.
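The encoding these claims describe — a haptic effect pattern of start-time/duration entries that can also be expressed as alternating actuator-OFF and actuator-ON durations (claims 5, 12, and 19) — can be illustrated with a short sketch. This is a hedged, minimal interpretation; all function and field names are hypothetical and not taken from the patent.

```python
# Hypothetical sketch of a haptic effect pattern: each effect is stored as a
# (start_time, duration) pair, keeping only haptically-active time periods,
# and the pattern can be flattened into alternating OFF/ON durations.

def encode_pattern(effects):
    """Encode haptic effects as (start_time, duration) tuples sorted by start."""
    return sorted((e["start"], e["duration"]) for e in effects)

def to_off_on_durations(pattern):
    """Flatten the pattern into alternating actuator-OFF / actuator-ON
    durations, beginning with the OFF period before the first effect."""
    durations = []
    cursor = 0
    for start, duration in pattern:
        durations.append(start - cursor)   # actuator OFF until the effect starts
        durations.append(duration)         # actuator ON for the effect
        cursor = start + duration
    return durations

effects = [{"start": 300, "duration": 20}, {"start": 100, "duration": 50}]
pattern = encode_pattern(effects)
print(pattern)                       # [(100, 50), (300, 20)]
print(to_off_on_durations(pattern))  # [100, 50, 150, 20]
```

Because only active periods are stored, idle stretches of the media object cost nothing in the pattern, which matches the claims' point that the processor can be disengaged outside the encoded effects.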
2,600
10,214
10,214
15,350,209
2,646
The present disclosure provides methods operable in a balloon network. The method can include determining that a balloon is at a location associated with a legally-defined geographic area. An area profile of the legally-defined geographic area may identify geographically-restricted data that must not be removed from the legally-defined geographic area. The method can also include determining that the balloon contains at least some of the geographically-restricted data. The method can also include determining that the balloon is likely to move out of the legally-defined geographic area. The method can also include removing the geographically-restricted data from the memory of the balloon.
1. A computer-implemented method comprising: determining that a first balloon is at a location associated with a legally-defined geographic area, wherein a balloon network provides service in a plurality of legally-defined geographic areas, and wherein an area profile identifies geographically-restricted data that must not be removed from the legally-defined geographic area; determining that the first balloon contains at least some of the geographically-restricted data; determining that the first balloon is likely to move out of the legally-defined geographic area; and responsively removing the geographically-restricted data from the memory of the first balloon prior to the first balloon leaving the legally-defined geographic area. 2. The method of claim 1, wherein the area profile further requires that the geographically-restricted data be saved in at least one location in the legally-defined geographic area, the method further comprising: responsive to the determination that the first balloon is likely to move out of the legally-defined geographic area, transferring the geographically-restricted data from the first balloon to a device that is located in the legally-defined geographic area, before removing the geographically-restricted data from the memory of the first balloon. 3. The method of claim 2, wherein the device receiving the geographically-restricted data from the first balloon is a second balloon located in the legally-defined geographic area. 4. The method of claim 2, wherein the device receiving the geographically-restricted data from the first balloon is a ground-based station located in the legally-defined geographic area. 5. The method of claim 2, wherein the geographically-restricted data is transferred via an RF air-interface. 6. The method of claim 2, wherein the geographically-restricted data is transferred via a free-space optical link. 7. 
The method of claim 1, wherein the area profile further requires that the geographically-restricted data be saved in at least one location in the legally-defined geographic area, the method further comprising: determining if a second balloon located in the legally-defined geographic area contains the geographically-restricted data; responsive to the determination that the second balloon does not include the geographically-restricted data, transferring the geographically-restricted data from the first balloon to a device that is located in the legally-defined geographic area, before removing the geographically-restricted data from the memory of the first balloon. 8. The method of claim 1, wherein determining the first balloon is likely to move out of the legally-defined geographic area comprises: determining a direction in which the first balloon is travelling; based on the direction in which the first balloon is travelling, determining a probability that the first balloon will move out of the legally-defined geographic area; and determining that the probability that the first balloon will move out of the legally-defined geographic area is greater than a threshold probability. 9. A method operable by a first balloon in a balloon network, the method comprising: receiving at least a portion of geographically-restricted data when the first balloon enters a legally-defined geographic area; determining that the legally-defined geographic area has an area profile that identifies geographically-restricted data that must not be removed from the legally-defined geographic area; determining that the first balloon is likely to move out of the legally-defined geographic area; and responsively removing the geographically-restricted data from the memory of the first balloon prior to the first balloon leaving the legally-defined geographic area. 10. 
The method of claim 9, wherein the area profile further requires that the geographically-restricted data be saved in at least one location in the legally-defined geographic area, the method further comprising: responsive to the determination that the first balloon is likely to move out of the legally-defined geographic area, transferring the geographically-restricted data from the first balloon to a device that is located in the legally-defined geographic area, before removing the geographically-restricted data from the memory of the first balloon. 11. The method of claim 9, wherein the device receiving the geographically-restricted data from the first balloon is a second balloon located in the legally-defined geographic area. 12. The method of claim 9, wherein the device receiving the geographically-restricted data from the first balloon is a ground-based station located in the legally-defined geographic area. 13. The method of claim 9, wherein the geographically-restricted data is transferred via an RF air-interface. 14. The method of claim 9, wherein the geographically-restricted data is transferred via a free-space optical link. 15. The method of claim 9, wherein the area profile further requires that the geographically-restricted data be saved in at least one location in the legally-defined geographic area, the method further comprising: determining if a second balloon located in the legally-defined geographic area contains the geographically-restricted data; responsive to the determination that the second balloon does not include the geographically-restricted data, transferring the geographically-restricted data from the first balloon to a device that is located in the legally-defined geographic area, before removing the geographically-restricted data from the memory of the first balloon. 16. 
The method of claim 9, wherein determining the first balloon is likely to move out of the legally-defined geographic area comprises: determining a direction in which the first balloon is travelling; based on the direction in which the first balloon is travelling, determining a probability that the first balloon will move out of the legally-defined geographic area; and determining that the probability that the first balloon will move out of the legally-defined geographic area is greater than a threshold probability. 17. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors in a computing device, cause that computing device to perform functions, the functions comprising: determining that a first balloon is at a location associated with a legally-defined geographic area, wherein a balloon network provides service in a plurality of legally-defined geographic areas, and wherein an area profile identifies geographically-restricted data that must not be removed from the legally-defined geographic area; determining that the first balloon contains at least some of the geographically-restricted data; determining that the first balloon is likely to move out of the legally-defined geographic area; and responsively removing the geographically-restricted data from the memory of the first balloon prior to the first balloon leaving the legally-defined geographic area. 18. 
The non-transitory computer-readable medium of claim 17, wherein the area profile further requires that the geographically-restricted data be saved in at least one location in the legally-defined geographic area, the functions further comprising: responsive to the determination that the first balloon is likely to move out of the legally-defined geographic area, transferring the geographically-restricted data from the first balloon to a device that is located in the legally-defined geographic area, before removing the geographically-restricted data from the memory of the first balloon. 19. The non-transitory computer-readable medium of claim 17, wherein the area profile further requires that the geographically-restricted data be saved in at least one location in the legally-defined geographic area, the functions further comprising: determining if a second balloon located in the legally-defined geographic area contains the geographically-restricted data; responsive to the determination that the second balloon does not contain the geographically-restricted data, transferring the geographically-restricted data from the first balloon to a device that is located in the legally-defined geographic area, before removing the geographically-restricted data from the memory of the first balloon. 20. The non-transitory computer-readable medium of claim 17, wherein determining the first balloon is likely to move out of the legally-defined geographic area comprises the functions of: determining a direction in which the first balloon is travelling; based on the direction in which the first balloon is travelling, determining a probability that the first balloon will move out of the legally-defined geographic area; and determining that the probability that the first balloon will move out of the legally-defined geographic area is greater than a threshold probability.
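The decision logic in claims 8, 16, and 20 — derive an exit probability from the balloon's heading and purge geographically-restricted data when it exceeds a threshold — can be sketched as follows. This is a toy model under stated assumptions (a linear falloff of exit probability with angular divergence from the border bearing); all names and the probability model are illustrative, not from the patent.

```python
# Illustrative sketch of the claimed decision: estimate the probability that
# a balloon travelling on a given heading will leave the legally-defined
# area, then remove restricted entries once that probability exceeds a
# threshold. The probability model is an assumption for demonstration only.

def exit_probability(heading_deg, bearing_to_border_deg, spread_deg=90.0):
    """Toy model: probability falls off linearly as the heading diverges
    from the bearing toward the nearest border, reaching 0 at spread_deg."""
    diff = abs((heading_deg - bearing_to_border_deg + 180) % 360 - 180)
    return max(0.0, 1.0 - diff / spread_deg)

def maybe_purge(memory, heading_deg, bearing_to_border_deg, threshold=0.5):
    """Drop restricted entries when the exit probability exceeds threshold."""
    if exit_probability(heading_deg, bearing_to_border_deg) > threshold:
        memory = {k: v for k, v in memory.items() if not v.get("restricted")}
    return memory

mem = {"logs": {"restricted": True}, "telemetry": {"restricted": False}}
print(maybe_purge(mem, heading_deg=10, bearing_to_border_deg=0))
```

In the dependent claims the purge is preceded by a transfer of the restricted data to a device still inside the area (another balloon or a ground station), which a fuller implementation would insert before the dictionary filter above.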
2,600
10,215
10,215
15,842,412
2,632
A system for a vehicle includes a LIDAR sensor attachable to the vehicle and a puddle lamp fixed relative to the LIDAR sensor and oriented to project a light projection downward beside the vehicle. A computer may be in communication with the LIDAR sensor and the puddle lamp and programmed to actuate the puddle lamp in response to receiving data generated by the LIDAR sensor indicating a user positioned within a threshold distance of the vehicle.
1. A system for a vehicle, the system comprising: a LIDAR sensor attachable to the vehicle; a puddle lamp fixed relative to the LIDAR sensor and oriented to project a light projection downward beside the vehicle; and a computer in communication with the LIDAR sensor and the puddle lamp and programmed to actuate the puddle lamp in response to receiving data from the LIDAR sensor indicating a human user positioned within a threshold distance of the vehicle. 2. The system of claim 1, wherein the LIDAR sensor is attachable to an A pillar of the vehicle. 3. The system of claim 1, wherein the puddle lamp is attached to the LIDAR sensor. 4. The system of claim 3, wherein the puddle lamp is disposed underneath the LIDAR sensor. 5. (canceled) 6. The system of claim 1, wherein the light projection is a first light projection, the computer is programmed to actuate the puddle lamp to project the first light projection in response to receiving data from the LIDAR sensor indicating the user positioned within the threshold distance from the vehicle, and then actuate the puddle lamp to project a second light projection in response to receiving data from the LIDAR sensor indicating that the user is positioned at a designated location relative to the vehicle. 7. The system of claim 6, wherein the first light projection and the second light projection have at least one of different shapes and different colors. 8. The system of claim 1, wherein the computer is programmed to actuate a door of the vehicle to open in response to receiving data from the LIDAR sensor indicating that the user is positioned at a designated location relative to the vehicle. 9. 
The system of claim 1, wherein the light projection is a first light projection, the computer is programmed to actuate the puddle lamp to project the first light projection in response to receiving data from the LIDAR sensor indicating the user positioned within the threshold distance from the vehicle, and then actuate the puddle lamp to project a second light projection in response to receiving data from the LIDAR sensor indicating an obstruction in a designated area relative to the vehicle. 10. A system for a vehicle, the system comprising: a sensor; a puddle lamp fixed relative to the sensor and oriented to project a light projection downward beside the vehicle; and a computer in communication with the sensor and the puddle lamp and programmed to actuate the puddle lamp in response to receiving data generated by the sensor indicating a human user positioned within a threshold distance of the vehicle. 11. The system of claim 10, wherein the light projection is a first light projection, the computer is programmed to actuate the puddle lamp to project the first light projection in response to receiving data generated by the sensor indicating the user positioned within the threshold distance of the vehicle, and then actuate the puddle lamp to project a second light projection in response to receiving data generated by the sensor indicating that the user is positioned at a designated location relative to the vehicle. 12. The system of claim 11, wherein the first light projection and the second light projection have at least one of different shapes and different colors. 13. The system of claim 10, wherein the computer is programmed to actuate a door of the vehicle to open in response to receiving data generated by the sensor indicating that the user is positioned at a designated location relative to the vehicle. 14. 
The system of claim 10, wherein the light projection is a first light projection, the computer is programmed to actuate the puddle lamp to project the first light projection in response to receiving data generated by the sensor indicating the user positioned within the threshold distance from the vehicle, and then actuate the puddle lamp to project a second light projection in response to receiving data generated by the sensor indicating an obstruction in a designated area relative to the vehicle. 15. The system of claim 10, further comprising a plurality of Bluetooth Low Energy sensors including the sensor. 16. The system of claim 15, wherein the computer is programmed to triangulate a position of the user based on data generated by the Bluetooth Low Energy sensors. 17. The system of claim 10, further comprising the vehicle including a body, a plurality of doors, the sensor, the puddle lamp, and the computer, wherein the puddle lamp is attached to the body and spaced from the doors. 18. The system of claim 17, wherein the puddle lamp is oriented to project the light projection beside one of the doors.
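The staged behavior in claims 1, 6, and 11 — a first light projection when the user comes within a threshold distance, then a second projection once the user reaches a designated location beside the vehicle — reduces to a simple distance-to-state mapping. The sketch below is a hedged illustration; the distance constants and names are assumptions, not values from the patent.

```python
# Hypothetical control sketch: map a sensed user distance (from the LIDAR or
# other sensor) to a puddle-lamp projection state. Thresholds are illustrative.

THRESHOLD_M = 3.0    # approach threshold for the first projection
DESIGNATED_M = 0.5   # designated location beside the door

def lamp_command(user_distance_m):
    """Return the puddle-lamp state for a given user distance in meters."""
    if user_distance_m <= DESIGNATED_M:
        return "second_projection"   # user at the designated location
    if user_distance_m <= THRESHOLD_M:
        return "first_projection"    # user within the approach threshold
    return "off"                     # no user nearby

print(lamp_command(5.0))  # off
print(lamp_command(2.0))  # first_projection
print(lamp_command(0.3))  # second_projection
```

Claims 6, 9, and 14 extend this mapping with additional inputs (a detected obstruction in a designated area also triggers the second projection, and reaching the designated location can additionally actuate a door), which would add branches to the same function.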
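The actuation sequence recited in the puddle-lamp claims (first projection on approach, second projection at a designated location or on detecting an obstruction, door opening) can be sketched as a small controller. Everything below is illustrative: the class and field names, the 3.0 m threshold, and the boolean sensor summary are assumptions, not taken from the application.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """Illustrative stand-in for data generated by the LIDAR / BLE sensor."""
    user_distance_m: float        # distance of the detected user from the vehicle
    at_designated_location: bool  # e.g. user standing beside a door
    obstruction_in_area: bool     # obstruction detected in the designated area

class PuddleLampController:
    """Sketch of the claimed logic: project a first light projection when a
    user comes within a threshold distance, escalate to a second projection
    (different shape or color) at a designated location or when an
    obstruction is detected, and open a door at the designated location."""

    THRESHOLD_M = 3.0  # hypothetical threshold distance

    def __init__(self):
        self.projection = None  # None, "first" or "second"
        self.door_open = False

    def on_sensor_data(self, reading: SensorReading) -> None:
        if reading.user_distance_m > self.THRESHOLD_M:
            self.projection = None
            return
        # User within the threshold: start with the first projection.
        self.projection = "first"
        # Either trigger escalates to the second projection.
        if reading.at_designated_location or reading.obstruction_in_area:
            self.projection = "second"
        # Open a door once the user reaches the designated location.
        if reading.at_designated_location:
            self.door_open = True
```

A single controller instance would be fed each new sensor reading; the projection state then tracks the user's approach.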
2,600
10,216
10,216
15,390,891
2,628
A projection system emits light pulses in a field of view and measures properties of reflections. Properties may include time of flight and return amplitude. Foreground objects and background surfaces are distinguished, and distances between foreground objects and background surfaces are determined based on reflections that are occluded by the foreground objects and on other properties of the projection system.
1. A method comprising: receiving returned illumination data generated by measuring a property of a scanned beam of light reflected off a foreground object and a background surface within a field of view, the returned illumination data including data representing an occlusion region within the field of view in which light reflected off the background surface is occluded from a receiver by the foreground object, and determining a distance between the foreground object and the background surface using an attribute of the occlusion region. 2. The method of claim 1 wherein the attribute of the occlusion region is a width of the occlusion region. 3. The method of claim 1 wherein determining a distance comprises determining a distance between the foreground object and the background surface using an attribute of the occlusion region and a physical characteristic of a system that generated the returned illumination data. 4. The method of claim 3 wherein the physical characteristic comprises an angle between a line formed between a light transmitter and a light receiver and the background surface. 5. The method of claim 3 wherein the physical characteristic comprises a distance between a light transmitter and a light receiver. 6. The method of claim 5 wherein the attribute of the occlusion region is a width of the occlusion region measured in a common dimension with a line formed between the light transmitter and the light receiver. 7. The method of claim 6 further comprising: determining a distance between the foreground object and the background surface at multiple points to produce multiple distance measurements; and averaging the multiple distance measurements. 8. The method of claim 6 further comprising: determining a distance between the foreground object and the background surface for multiple frames of returned illumination data to produce multiple distance measurements; and averaging the multiple distance measurements. 9. 
A non-transitory computer-readable medium having instructions stored thereon that when accessed result in a processor performing a method comprising: determining a width of an occlusion region adjacent to a foreground object, wherein the occlusion region results from the foreground object occluding from a receiver light reflected off a background surface; determining a distance between the foreground object and the background surface as a function of the width of the occlusion region. 10. The non-transitory computer-readable medium of claim 9 further comprising: determining a distance between the foreground object and the background surface at multiple points to produce multiple distance measurements; and averaging the multiple distance measurements. 11. The non-transitory computer-readable medium of claim 9 further comprising: determining a distance between the foreground object and the background surface for multiple frames of returned illumination data to produce multiple distance measurements; and averaging the multiple distance measurements. 12. The non-transitory computer-readable medium of claim 9 wherein determining a distance comprises determining a distance between the foreground object and the background surface using the width of the occlusion region and a distance between a light transmitter and a light receiver. 13. The non-transitory computer-readable medium of claim 12 wherein determining a distance comprises determining a distance between the foreground object and the background surface using the width of the occlusion region and an angle between a line formed between the light transmitter and light receiver and the background surface. 14. 
An apparatus comprising: a directional light transmitter to emit light pulses in a field of view; an omni-directional light receiver to receive reflections of the light pulses from a foreground object and a background surface in the field of view and to measure returned illumination data; an occlusion-based height estimation circuit to determine a distance between the foreground object and the background surface based on reflections occluded from the receiver by the foreground object. 15. The apparatus of claim 14 wherein the occlusion-based height estimation circuit determines the distance as a function of a width of an occlusion region formed by the occluded reflections. 16. The apparatus of claim 15 wherein the occlusion-based height estimation circuit further determines the distance as a function of a distance between the light transmitter and the light receiver. 17. The apparatus of claim 16 wherein the width of the occlusion region is measured in a common dimension with a line formed between the light transmitter and the light receiver. 18. The apparatus of claim 15 wherein the occlusion-based height estimation circuit further determines the distance as a function of an angle between a line formed between the light transmitter and light receiver and the background surface. 19. The apparatus of claim 14 wherein the occlusion-based height estimation circuit further determines a distance between the foreground object and the background surface at multiple points to produce multiple distance measurements, and averages the multiple distance measurements. 20. The apparatus of claim 14 wherein the occlusion-based height estimation circuit further determines a distance between the foreground object and the background surface for multiple frames of returned illumination data to produce multiple distance measurements, and averages the multiple distance measurements.
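The method claims recover object height from the width of the occluded strip together with physical characteristics of the system (transmitter-to-receiver distance, geometry relative to the background). One simple similar-triangles model, offered here only as an illustrative assumption: with the illumination arriving vertically and the receiver horizontally offset by a baseline b at height H above the background, a foreground object of height h hides a background strip of width w = h*b/(H - h) from the receiver, which inverts to h = w*H/(b + w). The symbols and geometry are not taken from the application.

```python
def height_from_occlusion(w: float, baseline: float, sensor_height: float) -> float:
    """Invert w = h*b/(H - h)  =>  h = w*H/(b + w).

    w             width of the occluded strip on the background
    baseline      transmitter-to-receiver distance b
    sensor_height height H of the sensor pair above the background surface
    (all in the same units)
    """
    return w * sensor_height / (baseline + w)

def averaged_height(widths, baseline, sensor_height):
    """Estimate at multiple points (or over multiple frames of returned
    illumination data) and average, as in the dependent claims."""
    estimates = [height_from_occlusion(w, baseline, sensor_height) for w in widths]
    return sum(estimates) / len(estimates)
```

For example, a 0.4-unit-tall object under a sensor pair at H = 2.0 with baseline b = 0.5 casts an occluded strip of width 0.4*0.5/1.6 = 0.125, and the inverse formula recovers 0.4.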
2,600
10,217
10,217
16,433,228
2,612
Several embodiments of scalable image processing systems and methods are disclosed herein whereby color management processing of source image data to be displayed on a target display is changed according to varying levels of metadata.
1. (canceled) 2. A method for processing a bitstream through metadata associated with the bitstream, said method comprising: receiving image data as the bitstream at a destination device, wherein: content encoded in the bitstream is produced on a reference display device, characteristics of the reference display device being identified in the bitstream by a set of one or more parameters, and the image data is associated with corresponding image metadata; decoding the bitstream; determining, by the destination device, if the image metadata includes a first set of metadata associated with a portion of the image data, wherein the first set of metadata includes a representation of the one or more parameters; and determining, by the destination device, if the image metadata includes a second set of metadata associated with video content characteristics of the same portion of the image data, wherein the second set of metadata includes at least a luminance level, wherein the first set of metadata includes: a. a white point for the reference display, b. three primaries for the reference display, c. a first luminance level for the reference display, and d. a second luminance level for the reference display, and wherein determining if the second set of metadata is received is independent of determining if the first set of metadata is received. 3. The method of claim 2, wherein the first set of metadata and the second set of metadata are received in the bitstream. 4. The method of claim 3, wherein the first set of metadata and the second set of metadata are separately partitioned in the bitstream. 5. The method of claim 2, further comprising receiving, by the destination device, metadata characterizing ambient light conditions. 6. The method of claim 2, further comprising determining, by the destination device, if a third set of metadata is received, wherein the third set of metadata is indicative of ambient light conditions. 7. 
An apparatus comprising: at least one non-transitory memory; and a bitstream of video images stored on the at least one non-transitory memory, the bitstream including a first set of metadata and a second set of metadata, wherein the first set of metadata and the second set of metadata are separately partitioned in the bitstream, wherein the video images are produced on a reference display device, and characteristics of the reference display device are identified in the bitstream by a set of one or more parameters, wherein the first set of metadata, which is associated with a portion of the video images, includes a representation of the one or more parameters, including at least: a. a white point for a reference display, b. three primaries for the reference display, c. a first luminance level for the reference display, and d. a second luminance level for the reference display, and wherein the second set of metadata includes at least a maximum luminance level of the same portion of the video images.
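The key limitation of claim 2 is that the presence check for the second set of metadata (content characteristics) is independent of the check for the first set (reference-display parameters): neither gates the other. A minimal sketch of that independence is below; the dataclass field names and the dict-based stand-in for a decoded bitstream partition are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ReferenceDisplayMetadata:
    """First set of metadata: parameters of the reference display."""
    white_point: Tuple[float, float]   # chromaticity of the white point
    primaries: tuple                   # three primaries
    first_luminance: float             # first luminance level, cd/m^2
    second_luminance: float            # second luminance level, cd/m^2

@dataclass
class ContentMetadata:
    """Second set of metadata: characteristics of this portion of content."""
    max_luminance: float               # at least a luminance level

def parse_metadata(partition: dict):
    """Test for each set independently: the second set is looked up
    whether or not the first set was found."""
    first: Optional[ReferenceDisplayMetadata] = partition.get("reference_display")
    second: Optional[ContentMetadata] = partition.get("content")
    return first, second
```

A destination device could then fall back to defaults for whichever set is absent, which is what makes the processing scale with the level of metadata available.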
2,600
10,218
10,218
15,979,212
2,616
Several embodiments of scalable image processing systems and methods are disclosed herein whereby color management processing of source image data to be displayed on a target display is changed according to varying levels of metadata.
1. (canceled) 2. A method for processing image data through metadata associated with the image data, said method comprising: receiving the image data as a bitstream at a destination device; decoding the image data; determining, by the destination device, if a first set of metadata associated with the image data is received, wherein the first set of metadata includes a representation of parameters of a reference display; and determining, by the destination device, if a second set of metadata for video content characteristics of the image data is received, the second set of metadata including at least a maximum luminance level of the image data, wherein the first set of metadata includes: a. a white point, represented as x, y chromaticity coordinates for the reference display, b. three primaries, each represented as x, y chromaticity coordinates for the reference display, c. a minimum luminance level for the reference display, and d. a maximum luminance level for the reference display, and wherein determining if the second set of metadata is received is independent of determining if the first set of metadata is received. 3. The method of claim 2, wherein the first set of metadata and the second set of metadata are received in the bitstream. 4. The method of claim 3, wherein the first set of metadata and the second set of metadata are separately partitioned in the bitstream. 5. The method of claim 2, further comprising receiving, by the destination device, metadata characterizing ambient light conditions. 6. An apparatus comprising: at least one non-transitory memory; and a bitstream stored on the at least one non-transitory memory, the bitstream including a first set of metadata and a second set of metadata, wherein the first set of metadata and the second set of metadata are separately partitioned in the bitstream; wherein the first set of metadata includes at least: a. a white point, represented as x, y chromaticity coordinates for a reference display, b. 
three primaries, each represented as x, y chromaticity coordinates for the reference display, c. a minimum luminance level for the reference display, and d. a maximum luminance level for the reference display; wherein the second set of metadata includes at least a maximum luminance level of the image data.
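Claim 2 represents the white point and the three primaries as x, y chromaticity coordinates. In color management such metadata is typically consumed by building the RGB-to-XYZ matrix of the reference display: each primary's XYZ column is scaled so that RGB = (1, 1, 1) maps to the white point. The construction below is the standard colorimetric one, not something taken from this application, and the Rec. 709 / D65 numbers in the usage note are illustrative.

```python
def xy_to_XYZ(x, y, Y=1.0):
    """Chromaticity (x, y) plus luminance Y -> tristimulus XYZ."""
    return (x * Y / y, Y, (1.0 - x - y) * Y / y)

def rgb_to_xyz_matrix(primaries, white):
    """Build the 3x3 RGB->XYZ matrix from three primaries and a white
    point, each given as (x, y) chromaticities."""
    cols = [xy_to_XYZ(x, y) for x, y in primaries]   # unscaled XYZ columns
    W = xy_to_XYZ(*white)
    # Solve M0 * S = W for the column scales S (3x3 Gauss-Jordan solve).
    a = [[cols[j][i] for j in range(3)] + [W[i]] for i in range(3)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(a[r][i]))  # partial pivot
        a[i], a[p] = a[p], a[i]
        for r in range(3):
            if r != i:
                f = a[r][i] / a[i][i]
                a[r] = [v - f * u for v, u in zip(a[r], a[i])]
    S = [a[i][3] / a[i][i] for i in range(3)]
    # Scale each primary's column by its solved weight.
    return [[S[j] * cols[j][i] for j in range(3)] for i in range(3)]
```

By construction the matrix maps (1, 1, 1) to the white point's XYZ, so e.g. feeding Rec. 709 primaries with a D65 white point yields rows that sum to the D65 tristimulus values.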
2,600
10,219
10,219
15,102,615
2,645
A wireless communication device and method therein for handling connection state changes in a wireless communication system are disclosed. The wireless communication device comprises at least two SIMs, in which a first SIM is in a first original connection state towards a first network node and a second SIM is in a second original connection state towards a second network node. The wireless communication device first determines whether the battery power in the wireless communication device is below a first threshold. When the battery power is below the first threshold, the wireless communication device selects a SIM out of the at least two SIMs for which to change its original connection state to a third connection state based on a parameter. The third connection state is a connection state implying lower power consumption for the wireless communication device compared to its original connection state. Then the wireless communication device performs a connection state change for the selected SIM from its original connection state to the third connection state.
1. A method performed in a wireless communication device for handling connection state changes in a wireless communication system, wherein the wireless communication device comprises at least two Subscriber Identity Modules, SIMs, in which a first SIM is in a first original connection state towards a first network node and a second SIM is in a second original connection state towards a second network node, the method comprising: determining whether a battery power in the wireless communication device is below a first threshold; when the battery power is below the first threshold, selecting a SIM out of the at least two SIMs for which to change its original connection state to a third connection state based on a parameter, wherein the third connection state is a connection state implying lower power consumption for the wireless communication device compared to its original connection state; and performing a connection state change for the selected SIM from its original connection state to the third connection state. 2. The method according to claim 1, wherein two or more SIMs are selected to change their respective original connection states to other connection states which have lower power consumption than their respective original connection states. 3. The method according to claim 1, wherein the parameter is associated to at least one of: a) radio channel characteristics for the connection state associated to the respective SIMs; b) radio network capabilities for the connection state associated to the respective SIMs; c) a user behavior or Quality of Service, QoS, for the connection state associated to the respective SIMs; d) a cost for the connection state associated to the respective SIMs; e) services provided by the connection state associated to the respective SIMs; f) a random sequence associated to the respective SIMs; g) a priority of each SIM set by user, operator or a random number. 4. 
The method according to claim 1, wherein the first and second connection states are active mode and the third connection state is idle mode. 5. The method according to claim 1, wherein the first and second connection states are idle mode and the third connection state is a detached state. 6. The method according to claim 1, wherein the first and second connection states are active mode and the third connection state is idle mode with occasional network attachments to poll received messages. 7. The method according to claim 3, wherein the radio channel characteristics comprise one of Signal Interference Noise Ratio, SINR, Channel State Information, CSI, channel bandwidth, carrier frequency. 8. The method according to claim 3, wherein QoS comprises one of throughput, latency, targeted Block Error Rate, BLER. 9. The method according to claim 3, wherein the radio network capabilities comprise one of maximum transmission rank, modulation order, carrier aggregation capabilities, beamforming capabilities, transmission mode. 10. The method according to claim 3, wherein the SIM with the lowest QoS is selected to change to a third connection state. 11. The method according to claim 3, wherein the SIM associated to the highest cost is selected to change to a third connection state. 12. The method according to claim 3, wherein the SIM providing lower priority services than data or voice services is selected to change to a third connection state. 13. The method according to claim 3, wherein performing a connection state change comprises Radio Resource Control, RRC, signaling to the network node associated to the selected SIM. 14. The method according to claim 3, wherein the connection state change comprises a change of radio access technology. 15. 
The method according to claim 3, further comprising: performing a connection state change for the selected SIM from the third connection state to a fourth connection state when it is determined that a battery of the wireless communication device is charging or the battery power is larger than a second threshold, wherein the fourth connection state has higher power consumption for the wireless communication device compared to the third connection state. 16. The method according to claim 15, wherein the fourth connection state is the original connection state of the selected SIM. 17. A wireless communication device for handling connection state changes in a wireless communication system, wherein the wireless communication device comprises at least two Subscriber Identity Modules, SIMs, in which a first SIM is in a first original connection state towards a first network node and a second SIM is in a second original connection state towards a second network node, the wireless communication device is configured to: determine whether a battery power in the wireless communication device is below a first threshold; when the battery power is below the first threshold, select a SIM out of the at least two SIMs for which to change its original connection state to a third connection state based on a parameter, wherein the third connection state is a connection state implying lower power consumption for the wireless communication device compared to its original connection state; and perform a connection state change for the selected SIM from its original connection state to the third connection state. 18. The wireless communication device according to claim 17, wherein two or more SIMs are selected to change their respective original connection states to other connection states which have lower power consumption than their respective original connection states. 19. 
The wireless communication device according to claim 17, wherein the parameter is associated to at least one of: h) radio channel characteristics for the connection state associated to the respective SIMs; i) radio network capabilities for the connection state associated to the respective SIMs; j) a user behavior or Quality of Service, QoS, for the connection state associated to the respective SIMs; k) a cost for the connection state associated to the respective SIMs; l) services provided by the connection state associated to the respective SIMs; m) a random sequence associated to the respective SIMs; n) a priority of each SIM set by user, operator or a random number. 20. The wireless communication device according to claim 17, wherein the first and second connection states are active mode and the third connection state is idle mode. 21. The wireless communication device according to claim 17, wherein the first and second connection states are idle mode and the third connection state is a detached state. 22. The wireless communication device according to claim 17, wherein the first and second connection states are active mode and the third connection state is idle mode with occasional network attachments to poll received messages. 23. The wireless communication device according to claim 19, wherein the radio channel characteristics comprise one of Signal Interference Noise Ratio, SINR, Channel State Information, CSI, channel bandwidth, carrier frequency. 24. The wireless communication device according to claim 19, wherein QoS comprises one of throughput, latency, targeted Block Error Rate, BLER. 25. The wireless communication device according to claim 19, wherein the radio network capabilities comprise one of maximum transmission rank, modulation order, carrier aggregation capabilities, beamforming capabilities, transmission mode. 26. 
The wireless communication device according to claim 19, wherein the SIM with the lowest QoS is selected to change to a third connection state. 27. The wireless communication device according to claim 19, wherein the SIM associated to the highest cost is selected to change to a third connection state. 28. The wireless communication device according to claim 17, wherein the SIM providing lower priority services than data or voice services is selected to change to a third connection state. 29. The wireless communication device according to claim 17, wherein the wireless communication device is configured to perform a connection state change by being configured to perform a Radio Resource Control, RRC, signaling to the network node associated to the selected SIM. 30. The wireless communication device according to claim 17, wherein the connection state change comprises a change of radio access technology. 31. The wireless communication device according to claim 17, further being configured to: perform a connection state change for the selected SIM from the third connection state to a fourth connection state when it is determined that a battery of the wireless communication device is charging or the battery power is larger than a second threshold, wherein the fourth connection state has higher power consumption for the wireless communication device compared to the third connection state. 32. The wireless communication device according to claim 31, wherein the fourth connection state is the original connection state of the selected SIM.
A wireless communication device and method therein for handling connection state changes in a wireless communication system are disclosed. The wireless communication device comprises at least two SIMs, in which a first SIM is in a first original connection state towards a first network node, a second SIM is in a second original connection state towards a second network node. The wireless communication device first determines whether a battery power in the wireless communication device is below a first threshold. When the battery power is below the first threshold, the wireless communication device selects a SIM out of the at least two SIMs for which to change its original connection state to a third connection state based on a parameter. The third connection state is a connection state implying lower power consumption for the wireless communication device compared to its original connection state. Then the wireless communication device performs a connection state change for the selected SIM from its original connection state to the third connection state.
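The selection logic these claims describe (claims 1, 3 and 10: when battery power drops below a first threshold, pick one SIM by a parameter such as QoS and move it to a lower-power connection state) can be sketched as below. This is a minimal illustration under assumed names and values, not the patented implementation; `Sim`, `maybe_park_sim`, the power ranking, and the 20% default threshold are all hypothetical.

```python
# Hypothetical sketch of the claimed multi-SIM power-saving method: when the
# battery falls below a first threshold, select one SIM (here by lowest QoS,
# mirroring dependent claim 10) and move it to a lower-power connection state.
from dataclasses import dataclass

IDLE, ACTIVE, DETACHED = "idle", "active", "detached"
POWER_RANK = {DETACHED: 0, IDLE: 1, ACTIVE: 2}  # higher rank = more power drawn

@dataclass
class Sim:
    name: str
    state: str   # original connection state
    qos: float   # e.g. throughput; lowest QoS is the candidate to park

def maybe_park_sim(sims, battery_pct, first_threshold=20, low_state=IDLE):
    """Return the SIM moved to the lower-power state, or None."""
    if battery_pct >= first_threshold:
        return None                              # battery fine: no change
    candidate = min(sims, key=lambda s: s.qos)   # select based on a parameter
    if POWER_RANK[low_state] < POWER_RANK[candidate.state]:
        candidate.state = low_state              # e.g. RRC release toward its node
        return candidate
    return None                                  # target state saves no power
```

A usage sketch: with `[Sim("SIM-A", ACTIVE, 50.0), Sim("SIM-B", ACTIVE, 5.0)]` and battery at 15%, `maybe_park_sim` parks SIM-B (the lowest-QoS SIM) into idle mode, matching the claim-10 variant of the selection parameter.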
2,600
10,220
10,220
12,941,385
2,683
Automated device/system setup based on user presence information is provided. When a user of one or more electronic devices or systems moves into the presence of the one or more devices or systems, detection or determination of the user's presence may be used to apply setup or settings changes to the one or more devices or systems. The user's presence relative to the one or more devices or systems may be detected according to a variety of means. A wireless device carried by the user may be detected by a wireless presence detector. Online/offline status of a user with respect to an Internet connection may be used to detect/determine presence of the user. Use of a wireline or wireless telephone, cable television set-top box or other device connected to a services provider may be used to determine presence information for the user.
1. A method of automating setup changes to an electronic device based on user presence information, comprising: receiving presence information for a specific user of an electronic device, the presence information indicating the user is in a physical proximity of the electronic device; passing the presence information for the user of the electronic device to the electronic device to indicate the user is in a physical proximity of the electronic device; and in response to receiving the presence information for the user of the electronic device at the electronic device, changing one or more settings of the electronic device based on the indication that the user is in a physical proximity of the electronic device. 2. The method of claim 1, wherein changing one or more settings of the electronic device based on the indication that the user is in a physical proximity of the electronic device includes personalizing one or more functionalities of the electronic device based on one or more preferences of the user. 3. The method of claim 2, wherein personalizing one or more functionalities of the electronic device includes personalizing one or more functionalities of the electronic device based on user preferences profile information maintained for the user in association with the electronic device. 4. The method of claim 1, wherein receiving presence information for the user of the electronic device includes receiving signaling from a wireless communications device physically associated with the user at a wireless presence detector located in a physical proximity of the electronic device. 5. The method of claim 4, wherein passing the presence information for the user of the electronic device to the electronic device to indicate the user is in a physical proximity of the electronic device includes passing the presence information to the electronic device via the wireless presence detector. 6. 
The method of claim 1, wherein receiving presence information for the user of the electronic device includes detection of a wireless presence detector by a second electronic device in a possession of the user; and wherein passing the presence information for the user of the electronic device to the electronic device to indicate the user is in a physical proximity of the electronic device includes passing the presence information directly from the second electronic device to the electronic device for which one or more settings are changed based on the indication that the user is in a physical proximity of the electronic device. 7. The method of claim 1, wherein receiving presence information for the user of the electronic device includes detecting a location of the user relative to the electronic device based on a distance from a wireless communications device physically associated with the user to a wireless network transmission point. 8. The method of claim 7, wherein if the location of the user relative to the electronic device based on a distance from the wireless communications device physically associated with the user to the wireless network transmission point indicates that the user is within a prescribed proximity of the electronic device, indicating the user is in a physical proximity of the electronic device. 9. The method of claim 1, wherein receiving presence information for the user of the electronic device includes determining a location of the user relative to the electronic device based on global positioning satellite data for a wireless communications device physically associated with the user. 10. The method of claim 9, wherein if the location of the user relative to the electronic device based on global positioning satellite data for a wireless communications device indicates that the user is within a prescribed proximity of the electronic device, indicating the user is in a physical proximity of the electronic device. 11. 
The method of claim 1, wherein receiving presence information for the user of the electronic device includes receiving an indication the user is in a physical proximity of the electronic device based on use of the electronic device or use of another electronic device in the physical proximity of the electronic device, wherein use of the electronic device or use of another electronic device indicates the user is in the physical proximity of the electronic device. 12. The method of claim 11, wherein a notification of the use of the electronic device or of the use of another electronic device is passed to a network component operative to interpret the use of the electronic device or the use of the other electronic device as an indication that the user is in a physical proximity of the electronic device. 13. The method of claim 1, wherein receiving presence information for the user of the electronic device includes receiving the presence information for the user of the electronic device at a presence server operative to notify a communications system through which the electronic device operates that the user is in the physical proximity of the electronic device. 14. The method of claim 1, wherein passing the presence information for the user of the electronic device to the electronic device to indicate the user is in a physical proximity of the electronic device includes passing the presence information for the user of the electronic device to the electronic device via a communications system operative to change one or more settings of the electronic device based on the user's physical proximity with the electronic device. 15. 
A method of automating setup changes to an electronic device based on user presence information, comprising: receiving presence information for one or more users of an electronic device, the presence information indicating the one or more users are in a physical presence of the electronic device; passing the presence information for the one or more users of the electronic device to the electronic device to indicate the one or more users are in a physical presence of the electronic device; and in response to receiving the presence information for the one or more users of the electronic device at the electronic device, changing one or more settings of the electronic device based on the indication that the one or more users are in a physical presence of the electronic device, including personalizing one or more functionalities of the electronic device based on user preferences profile information maintained for the one or more users in association with the electronic device. 16. The method of claim 15, wherein changing one or more settings of the electronic device based on the indication that the one or more users are in a physical presence of the electronic device includes applying one or more settings associated with each of the one or more users to the electronic device simultaneously. 17. The method of claim 15, further comprising determining whether one or more settings of a first user takes precedence over the one or more settings of a second user; and wherein changing one or more settings of the electronic device based on the indication that the one or more users are in a physical presence of the electronic device includes applying one or more settings associated with the first user and the second user such that the one or more settings associated with the first user take precedence over application of the one or more settings associated with the second user. 18. 
The method of claim 15, wherein receiving presence information for the one or more users of the electronic device includes receiving an indication the one or more users are in a physical presence of the electronic device based on use of the electronic device or use of another electronic device in the physical presence of the electronic device, wherein use of the electronic device or use of another electronic device indicates the one or more users are in the physical presence of the electronic device. 19. A method of automating setup changes to an electronic device based on user presence information, comprising: receiving an indication a user is in a physical proximity of an electronic device based on use of the electronic device or use of another electronic device in the physical proximity of the electronic device; passing the presence information for the user of the electronic device to the electronic device to indicate the user is in a physical proximity of the electronic device; and in response to receiving the presence information for the user of the electronic device at the electronic device, changing one or more settings of the electronic device based on the indication that the user is in a physical proximity of the electronic device. 20. The method of claim 19, wherein a notification of the use of the electronic device or of the use of another electronic device is passed to a network component operative to interpret the use of the electronic device or the use of the other electronic device as an indication that the user is in a physical proximity of the electronic device.
2,600
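The multi-user settings flow recited in claims 15 through 17 above (apply each present user's preference profile, with one user's settings taking precedence over another's on conflict) can be sketched as follows. This is a minimal illustration only; the class, field, and profile names are invented assumptions and the claims do not specify any implementation.

```python
# Hedged sketch of the claimed setup-automation flow: the device receives
# presence information for one or more users and applies each user's stored
# preference profile, with an optional precedence order so that an earlier
# user's settings override a later user's on conflicts (claims 15-17).
class ElectronicDevice:
    def __init__(self, profiles, precedence=None):
        self.profiles = profiles            # user -> {setting: value}
        self.precedence = precedence or []  # earlier users win conflicts
        self.settings = {}

    def on_presence(self, present_users):
        """Apply profile settings for every user reported present.

        Users later in the precedence list are applied first, so users
        earlier in the list overwrite them on conflicting settings.
        """
        ordered = sorted(
            present_users,
            key=lambda u: (self.precedence.index(u)
                           if u in self.precedence else len(self.precedence)),
            reverse=True,
        )
        for user in ordered:
            self.settings.update(self.profiles.get(user, {}))
        return self.settings


# Illustrative profiles: alice's settings take precedence over bob's.
profiles = {
    "alice": {"volume": 20, "language": "en"},
    "bob": {"volume": 55, "subtitles": True},
}
device = ElectronicDevice(profiles, precedence=["alice", "bob"])
settings = device.on_presence(["bob", "alice"])
```

Bob's profile is applied first and Alice's second, so the conflicting `volume` setting ends up at Alice's value while Bob's non-conflicting `subtitles` setting survives, matching the precedence behavior of claim 17.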
10,221
10,221
15,538,052
2,649
A communication system has head wearable devices arranged for wireless communication in an operating environment. A first head wearable device ( 100 ) has a camera ( 120 ) for capturing an environment image and a first processor ( 110 ) arranged for detecting a further head wearable device ( 150 ) based on a viewing direction of a wearer of the first head wearable device. Then the further head wearable device is identified at an operational distance based on a remotely detectable visual property and a communication session is established between the first head wearable device and the further head wearable device. The further head wearable device has a further processor ( 160 ) for establishing the communication session. Effectively the communication session is initiated based on eye contact of the wearers of the head wearable devices. Gestures of the wearers may be detected to initiate or define the communication session.
1. A communication system, comprising: a number of head wearable devices for use in an operating environment, the head wearable devices being arranged for a wireless communication, a first head wearable device of the number of head wearable devices comprising a camera for capturing an environment image of the operating environment while the first head wearable device is worn by a first wearer, and a first processor configured for processing the environment image for detecting a second head wearable device based on a viewing direction of the first wearer, the second head wearable device being worn by a second wearer in the operating environment at an operational distance from the first wearer, identifying the second head wearable device at the operational distance based on a remotely detectable visual property, and establishing a communication session between the first head wearable device and the second head wearable device; the second head wearable device comprising a second processor configured for establishing the communication session between the first and the second head wearable devices. 2. The communication system as claimed in claim 1, wherein the first and second head wearable devices are for establishing a secure communication session by deriving a vector based context parameter by triangulating at least one of an angle and a distance between the first and second head wearable devices towards a common reference point and using the vector based context parameter as a common secret for deriving an encryption key. 3. The communication system as claimed in claim 1, wherein the first head wearable device comprises an inward facing camera for capturing an eye image of an eye of the first wearer, and the first processor is configured for detecting the viewing direction of the first wearer based on the eye image. 4. 
The communication system as claimed in claim 1, wherein the remotely detectable visual property is at least one of: a coded light emanating from the second head wearable device; a visual mark provided on the second head wearable device; a visual property of the second head wearable device; a visual property of the second wearer; a gesture of the second wearer. 5. The communication system as claimed in claim 1, wherein the first processor is configured for establishing the communication session comprising at least one of: generating a session request and sending the session request to the second head wearable device, and receiving a session confirmation from the second head wearable device, and wherein the second processor is configured for establishing the wireless communication comprising at least one of: receiving the session request from the first head wearable device, and generating the session confirmation and sending the session confirmation to the first head wearable device. 6. The communication system as claimed in claim 1, wherein the first processor is configured for generating a session request based on detecting a gesture of the first wearer, and the second processor is configured for generating a session confirmation based on detecting a gesture of the second wearer. 7. The communication system as claimed in claim 1, wherein the first processor is configured for generating a session request based on detecting a gesture of the second wearer, and the second processor is configured for generating a session confirmation based on detecting a gesture of the first wearer. 8. 
The communication system as claimed in claim 1, wherein the first processor is configured for determining a communication session type being one of: a docking session between the first head wearable device as a host and the second head wearable device as a dockee; a docking session between the first head wearable device as a dockee and the second head wearable device as the host; a voice communication session; a visual information sharing session; and a peripheral sharing session. 9. The communication system as claimed in claim 1, wherein the first processor is configured for determining a communication session type based on a gesture of the first wearer or a gesture of the second wearer. 10. The communication system as claimed in claim 1, wherein the first and second processors are configured for providing a user interaction session, the user interaction session comprising at least one of: a voice communication between the first and second wearers, information sharing or interactive use of the information between the first and second wearers, the information comprising at least one of: a text; images; a video; computer generated data; sharing of a peripheral or a part of a peripheral comprising sharing a computer screen; or taking over a display of the second head wearable device by the first head wearable device. 11. The communication system as claimed in claim 1, wherein the first processor is configured for functioning as a dockee in a docking session, and the second processor is configured for functioning as a host in the docking session, the host being configured for coupling to at least one peripheral, the host being configured for wireless docking with the first head wearable device as a dockee, and for providing access to the peripheral for the first head wearable device, the first head wearable device configured for wireless docking as the dockee of the host for accessing the peripheral. 12. 
A head wearable device for use in a communication system comprising: a wireless communication unit for wireless communication, a camera for capturing an environment image of the operating environment while the head wearable device is worn by a wearer, and a processor configured for processing the environment image for detecting a further head wearable device based on a viewing direction of the wearer, the further head wearable device being worn by a further wearer in the operating environment at an operational distance from the wearer, identifying the further head wearable device at the operational distance based on a remotely detectable visual property, and establishing a communication session between the head wearable device and the further head wearable device. 13. A method of communicating between a first head wearable device and a second head wearable device in a communication system, the method comprising: capturing, by a camera, an environment image of an operating environment while a first head wearable device is worn by a first wearer; processing the environment image for detecting a second head wearable device based on a viewing direction of the first wearer, the second head wearable device being worn by a second wearer in the operating environment at an operational distance from the first wearer, identifying the second head wearable device at the operational distance based on a remotely detectable visual property, and establishing a communication session between the first and second head wearable devices. 14. The method as claimed in claim 13, comprising establishing a secure communication session by deriving a vector based context parameter by triangulating at least one of an angle and a distance between the first and second head wearable devices towards a common reference point and using the vector based context parameter as a common secret for deriving an encryption key. 15. (canceled) 16. 
A non-transitory computer-readable medium having one or more executable instructions stored thereon, which when executed by a processor, cause the processor to perform a method for communicating between a first head wearable device and a second head wearable device in a communication system, the method comprising: capturing, by a camera, an environment image of an operating environment while a first head wearable device is worn by a first wearer; processing the environment image for detecting a second head wearable device based on a viewing direction of the first wearer, the second head wearable device being worn by a second wearer in the operating environment at an operational distance from the first wearer; identifying the second head wearable device at the operational distance based on a remotely detectable visual property; and establishing a communication session between the first and second head wearable devices. 17. The communication system as claimed in claim 1, wherein the first processor is configured for establishing the communication session by detecting a session confirmation via a gesture of the second wearer, and wherein the second processor is configured for establishing the communication session by detecting a session request via a gesture of the first wearer. 18. The method as claimed in claim 13, further comprising establishing the communication session by detecting a session confirmation via a gesture of the second wearer, and establishing the communication session by detecting a session request via a gesture of the first wearer.
A communication system has head wearable devices arranged for wireless communication in an operating environment. A first head wearable device ( 100 ) has a camera ( 120 ) for capturing an environment image and a first processor ( 110 ) arranged for detecting a further head wearable device ( 150 ) based on a viewing direction of a wearer of the first head wearable device. Then the further head wearable device is identified at an operational distance based on a remotely detectable visual property and a communication session is established between the first head wearable device and the further head wearable device. The further head wearable device has a further processor ( 160 ) for establishing the communication session. Effectively the communication session is initiated based on eye contact of the wearers of the head wearable devices. Gestures of the wearers may be detected to initiate or define the communication session.1. A communication system, comprising: a number of head wearable devices for use in an operating environment, the head wearable devices being arranged for a wireless communication, a first head wearable device of the number of head wearable devices comprising a camera for capturing an environment image of the operating environment while the first head wearable device is worn by a first wearer, and a first processor configured for processing the environment image for detecting a second head wearable device based on a viewing direction of the first wearer, the second head wearable device being worn by a second wearer in the operating environment at an operational distance from the first wearer, identifying the second head wearable device at the operational distance based on a remotely detectable visual property, and establishing a communication session between the first head wearable device and the second head wearable device; the second head wearable device comprising a second processor configured for establishing the communication session between the 
first and the second head wearable devices. 2. The communication system as claimed in claim 1, wherein the first and second head wearable devices are for establishing a secure communication session by deriving a vector based context parameter by triangulating at least one of an angle and a distance between the first and second head wearable devices towards a common reference point and using the vector based context parameter as a common secret for deriving an encryption key. 3. The communication system as claimed in claim 1, wherein the first head wearable device comprises an inward facing camera for capturing an eye image of an eye of the first wearer, and the first processor is configured for detecting the viewing direction of the first wearer based on the eye image. 4. The communication system as claimed in claim 1, wherein the remotely detectable visual property is at least one of: a coded light emanating from the second head wearable device; a visual mark provided on the second head wearable device; a visual property of the second head wearable device; a visual property of the second wearer; a gesture of the second wearer. 5. The communication system as claimed in claim 1, wherein the first processor is configured for establishing the communication session comprising at least one of: generating a session request and sending the session request to the second head wearable device, and receiving a session confirmation from the second head wearable device, and wherein the second processor is configured for establishing the wireless communication comprising at least one of: receiving the session request from the first head wearable device, and generating the session confirmation and sending the session confirmation to the first head wearable device. 6. 
The communication system as claimed in claim 1, wherein the first processor is configured for generating a session request based on detecting a gesture of the first wearer, and the second processor is configured for generating a session confirmation based on detecting a gesture of the second wearer. 7. The communication system as claimed in claim 1, wherein the first processor is configured for generating a session request based on detecting a gesture of the second wearer, and the second processor is configured for generating a session confirmation based on detecting a gesture of the first wearer. 8. The communication system as claimed in claim 1, wherein the first processor is configured for determining a communication session type being one of: a docking session between the first head wearable device as a host and the second head wearable device as a dockee; a docking session between the first head wearable device as a dockee and the second head wearable device as the host; a voice communication session; a visual information sharing session; and a peripheral sharing session. 9. The communication system as claimed in claim 1, wherein the first processor is configured for determining a communication session type based on a gesture of the first wearer or a gesture of the second wearer. 10. The communication system as claimed in claim 1, wherein the first and second processors are configured for providing a user interaction session, the user interaction session comprising at least one of: a voice communication between the first and second wearers, information sharing or interactive use of the information between the first and second wearers, the information comprising at least one of: a text; images; a video; computer generated data; sharing of a peripheral or a part of a peripheral comprising sharing a computer screen; or taking over a display of the second head wearable device by the first head wearable device. 11. 
The communication system as claimed in claim 1, wherein the first processor is configured for functioning as a dockee in a docking session, and the second processor is configured for functioning as a host in the docking session, the host being configured for coupling to at least one peripheral, the host being configured for wireless docking with the first head wearable device as a dockee, and for providing access to the peripheral for the first head wearable device, the first head wearable device configured for wireless docking as the dockee of the host for accessing the peripheral. 12. A head wearable device for use in a communication system comprising: a wireless communication unit for wireless communication, a camera for capturing an environment image of the operating environment while the head wearable device is worn by a wearer, and a processor configured for processing the environment image for detecting a further head wearable device based on a viewing direction of the wearer, the further head wearable device being worn by a further wearer in the operating environment at an operational distance from the wearer, identifying the further head wearable device at the operational distance based on a remotely detectable visual property, and establishing a communication session between the head wearable device and the further head wearable device. 13. 
A method of communicating between a first head wearable device and a second head wearable device in a communication system, the method comprising: capturing, by a camera, an environment image of an operating environment while a first head wearable device is worn by a first wearer; processing the environment image for detecting a second head wearable device based on a viewing direction of the first wearer, the second head wearable device being worn by a second wearer in the operating environment at an operational distance from the first wearer, identifying the second head wearable device at the operational distance based on a remotely detectable visual property, and establishing a communication session between the first and second head wearable devices. 14. The method as claimed in claim 13, comprising establishing a secure communication session by deriving a vector based context parameter by triangulating at least one of an angle and a distance between the first and second head wearable devices towards a common reference point and using the vector based context parameter as a common secret for deriving an encryption key. 15. (canceled) 16. 
A non-transitory computer-readable medium having one or more executable instructions stored thereon, which when executed by a processor, cause the processor to perform a method for communicating between a first head wearable device and a second head wearable device in a communication system, the method comprising: capturing, by a camera, an environment image of an operating environment while a first head wearable device is worn by a first wearer; processing the environment image for detecting a second head wearable device based on a viewing direction of the first wearer, the second head wearable device being worn by a second wearer in the operating environment at an operational distance from the first wearer; identifying the second head wearable device at the operational distance based on a remotely detectable visual property; and establishing a communication session between the first and second head wearable devices. 17. The communication system as claimed in claim 1, wherein the first processor is configured for establishing the communication session by detecting a session confirmation via a gesture of the second wearer, and wherein the second processor is configured for establishing the communication session by detecting a session request via a gesture of the first wearer. 18. The method as claimed in claim 13, further comprising establishing the communication session by detecting a session confirmation via a gesture of the second wearer, and establishing the communication session by detecting a session request via a gesture of the first wearer.
2,600
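The eye-contact pairing flow recited in claims 1 and 5 of the record above (detect a second device along the wearer's viewing direction, identify it by a remotely detectable visual property, then exchange a session request and confirmation) can be sketched as follows. All class, field, and mark names here are illustrative assumptions, not anything specified by the claims.

```python
# Hedged sketch of the claimed eye-contact pairing flow (claims 1 and 5):
# the first device detects a second device along the wearer's viewing
# direction, identifies it by a remotely detectable visual property
# (here a visual mark string), then exchanges a request/confirmation.
class HeadWearableDevice:
    def __init__(self, device_id, visual_mark):
        self.device_id = device_id
        self.visual_mark = visual_mark
        self.session_with = None

    def detect_in_view(self, environment, viewing_direction):
        """Return the device seen along the wearer's viewing direction."""
        return environment.get(viewing_direction)

    def receive_request(self, requester):
        # Claim 5: receive the session request and send a confirmation.
        self.session_with = requester.device_id
        return {"confirm": True, "from": self.device_id}

    def establish_session(self, environment, viewing_direction, expected_mark):
        other = self.detect_in_view(environment, viewing_direction)
        if other is None or other.visual_mark != expected_mark:
            return False  # identification by visual property failed
        reply = other.receive_request(self)
        if reply["confirm"]:
            self.session_with = other.device_id
        return self.session_with == other.device_id


first = HeadWearableDevice("hwd-1", "mark-A")
second = HeadWearableDevice("hwd-2", "mark-B")
environment = {"north": second}  # the first wearer is looking north

ok = first.establish_session(environment, "north", expected_mark="mark-B")
```

After the handshake both devices record each other as session peers, mirroring the request/confirmation exchange of claim 5; a mismatched visual mark would abort before any request is sent.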
10,222
10,222
16,057,544
2,683
Upon receiving a signal from a remote control device, a device identifies command data usable to communicate with a consumer device. The signal contains an indication of a pressed key, which corresponds to a function of the consumer device. The device generates a command signal having the command data for transmission to the consumer device to control the selected function of the consumer device using a format recognizable by the consumer device.
1. A universal remote controller that is controllable by a first remote control, comprising: at least one memory configured to store program instructions for universal remote controller operations; and at least one processor configured to access the at least one memory and to execute the program instructions, the program instructions comprising instructions configured to: receive a first control signal for a first control function that is transmitted by the first remote control; determine at least one other device for which the first control function is directed, the at least one other device being controllable by at least one other remote control; and transmit a second control signal for the first control function to at least one other device, thereby permitting the first remote control to control the at least one other device without the first remote control being configured with a command transmission protocol for controlling the at least one other device. 2. The universal remote controller as recited in claim 1, wherein the at least one other remote control utilizes a different type of command transmission protocol than the first remote control and wherein the second control signal is transmitted using the different type of command transmission protocol. 3. The universal remote controller as recited in claim 1, wherein the universal remote controller is embodied in an audio/visual (A/V) source device and wherein the at least one other device comprises an A/V sink device. 4. The universal remote controller as recited in claim 3, wherein the A/V source device comprises a set-top box. 5. The universal remote controller as recited in claim 3, wherein the A/V sink device comprises a television. 6. 
The universal remote controller as recited in claim 1, wherein the program instructions comprise instructions further configured to generate a graphical user interface wherein the graphical user interface includes options selectable by a user of the first remote control to configure the universal remote controller with the command transmission protocol for controlling the at least one other device. 7. The universal remote controller as recited in claim 1, wherein the program instructions comprise instructions further configured to download the command transmission protocol for controlling the at least one other device from a remotely located server device. 8. The universal remote controller as recited in claim 1, wherein the program instructions comprise instructions further configured to cause the universal remote controller to transmit the second control signal directly to the at least one other device. 9. The universal remote controller as recited in claim 8, wherein the command transmission protocol comprises a wireless command transmission protocol. 10. The universal remote controller as recited in claim 9, wherein the command transmission protocol comprises a radio frequency command transmission protocol. 11. A method performed by a universal remote controller, the method comprising: receiving a first control signal for a first control function that is transmitted by a first remote control; determining at least one other device for which the first control function is directed, the at least one other device being controllable by at least one other remote control; and transmitting a second control signal for the first control function to at least one other device, thereby permitting the first remote control to control the at least one other device without the first remote control being configured with a command transmission protocol for controlling the at least one other device. 12. 
The method as recited in claim 11, wherein the at least one other remote control utilizes a different type of command transmission protocol than the first remote control and wherein the second control signal is transmitted using the different type of command transmission protocol. 13. The method as recited in claim 11, wherein the universal remote controller is embodied in an audio/visual (A/V) source device and wherein the at least one other device comprises an A/V sink device. 14. The method as recited in claim 13, wherein the A/V source device comprises a set-top box. 15. The method as recited in claim 13, wherein the A/V sink device comprises a television. 16. The method as recited in claim 11, further comprising generating a graphical user interface wherein the graphical user interface includes options selectable by a user of the first remote control to configure the universal remote controller with the command transmission protocol for controlling the at least one other device. 17. The method as recited in claim 11, further comprising downloading the command transmission protocol for controlling the at least one other device from a remotely located server device. 18. The method as recited in claim 11, further comprising causing the universal remote controller to transmit the second control signal directly to the at least one other device. 19. The method as recited in claim 18, wherein the command transmission protocol comprises a wireless command transmission protocol. 20. The method as recited in claim 19, wherein the command transmission protocol comprises a radio frequency command transmission protocol.
Upon receiving a signal from a remote control device, a device identifies command data usable to communicate with a consumer device. The signal contains an indication of a pressed key, which corresponds to a function of the consumer device. The device generates a command signal having the command data for transmission to the consumer device to control the selected function of the consumer device using a format recognizable by the consumer device.1. A universal remote controller that is controllable by a first remote control, comprising: at least one memory configured to store program instructions for universal remote controller operations; and at least one processor configured to access the at least one memory and to execute the program instructions, the program instructions comprising instructions configured to: receive a first control signal for a first control function that is transmitted by the first remote control; determine at least one other device for which the first control function is directed, the at least one other device being controllable by at least one other remote control; and transmit a second control signal for the first control function to at least one other device, thereby permitting the first remote control to control the at least one other device without the first remote control being configured with a command transmission protocol for controlling the at least one other device. 2. The universal remote controller as recited in claim 1, wherein the at least one other remote control utilizes a different type of command transmission protocol than the first remote control and wherein the second control signal is transmitted using the different type of command transmission protocol. 3. The universal remote controller as recited in claim 1, wherein the universal remote controller is embodied in an audio/visual (A/V) source device and wherein the at least one other device comprises an A/V sink device. 4. 
The universal remote controller as recited in claim 3, wherein the A/V source device comprises a set-top box. 5. The universal remote controller as recited in claim 3, wherein the A/V sink device comprises a television. 6. The universal remote controller as recited in claim 1, wherein the program instructions comprise instructions further configured to generate a graphical user interface wherein the graphical user interface includes options selectable by a user of the first remote control to configure the universal remote controller with the command transmission protocol for controlling the at least one other device. 7. The universal remote controller as recited in claim 1, wherein the program instructions comprise instructions further configured to download the command transmission protocol for controlling the at least one other device from a remotely located server device. 8. The universal remote controller as recited in claim 1, wherein the program instructions comprise instructions further configured to cause the universal remote controller to transmit the second control signal directly to the at least one other device. 9. The universal remote controller as recited in claim 8, wherein the command transmission protocol comprises a wireless command transmission protocol. 10. The universal remote controller as recited in claim 9, wherein the command transmission protocol comprises a radio frequency command transmission protocol. 11. 
A method performed by a universal remote controller, the method comprising: receiving a first control signal for a first control function that is transmitted by a first remote control; determining at least one other device for which the first control function is directed, the at least one other device being controllable by at least one other remote control; and transmitting a second control signal for the first control function to at least one other device, thereby permitting the first remote control to control the at least one other device without the first remote control being configured with a command transmission protocol for controlling the at least one other device. 12. The method as recited in claim 11, wherein the at least one other remote control utilizes a different type of command transmission protocol than the first remote control and wherein the second control signal is transmitted using the different type of command transmission protocol. 13. The method as recited in claim 11, wherein the universal remote controller is embodied in an audio/visual (A/V) source device and wherein the at least one other device comprises an A/V sink device. 14. The method as recited in claim 13, wherein the A/V source device comprises a set-top box. 15. The method as recited in claim 13, wherein the A/V sink device comprises a television. 16. The method as recited in claim 11, further comprising generating a graphical user interface wherein the graphical user interface includes options selectable by a user of the first remote control to configure the universal remote controller with the command transmission protocol for controlling the at least one other device. 17. The method as recited in claim 11, further comprising downloading the command transmission protocol for controlling the at least one other device from a remotely located server device. 18. 
The method as recited in claim 11, further comprising causing the universal remote controller to transmit the second control signal directly to the at least one other device. 19. The method as recited in claim 18, wherein the command transmission protocol comprises a wireless command transmission protocol. 20. The method as recited in claim 19, wherein the command transmission protocol comprises a radio frequency command transmission protocol.
2,600
10,223
10,223
16,522,851
2,664
A captured image is analyzed and it is determined whether a land is used for the same use as a registered use. A land use determination system for determining whether a land is used for a same use as a registered use includes an image acquiring unit that acquires an image obtained by image-capturing a land, an address specifying unit that analyzes the image to specify an address of the land, a use specifying unit that analyzes the image to specify a use for which the land is used, and a determining unit that determines whether a registered use of the land based on the address matches the use of the land specified by the analysis.
1. A land use determination system for determining whether a land is used for a same use as a registered use, comprising: an image acquiring unit that acquires an image obtained by image-capturing a land; an address specifying unit that analyzes the image to specify an address of the land; a use specifying unit that analyzes the image to specify a use for which the land is used; and a determining unit that determines whether a registered use of the land based on the address matches the use of the land specified by the analysis. 2. The land use determination system according to claim 1, wherein the use specifying unit specifies the use for which the land is used, by analyzing the image to specify a type of crop grown on the land. 3. The land use determination system according to claim 1, wherein the use specifying unit specifies the use for which the land is used, by analyzing the image to specify a growth status of a crop grown on the land. 4. The land use determination system according to claim 1, wherein the use specifying unit specifies the use for which the land is used, by analyzing the image to specify a building built on the land. 5. The land use determination system according to claim 1, wherein the determining unit determines whether the registered use of the land based on the address matches the use of the land specified by the analysis, by referring to a resident register. 6. The land use determination system according to claim 1, wherein the determining unit determines whether the registered use of the land based on the address matches the use of the land specified by the analysis, by referring to a certified copy of a register. 7. 
The land use determination system according to claim 1, wherein the image acquiring unit acquires time-series images of the image-captured land, wherein the use specifying unit analyzes the time-series images to specify a change in the use for which the land is used, and wherein the determining unit determines whether the registered use of the land based on the address matches the changed use of the land specified by the analysis. 8. A land use determination method of determining whether a land is used for a same use as a registered use, comprising: acquiring an image obtained by image-capturing a land; analyzing the image to specify an address of the land; analyzing the image to specify a use for which the land is used; and determining whether a registered use of the land based on the address matches the use of the land specified by the analysis. 9. A program for causing a computer to execute: acquiring an image obtained by image-capturing a land; analyzing the image to specify an address of the land; analyzing the image to specify a use for which the land is used; and determining whether a registered use of the land based on the address matches the use of the land specified by the analysis.
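The claimed pipeline (captured image → specified address → specified use → comparison against the registered use) can be sketched roughly as follows. This is a minimal illustration only: the analyzer stubs and the `REGISTERED_USES` registry mapping are assumptions, not anything disclosed in the application, and real image analysis would replace the dictionary lookups.

```python
# Hypothetical sketch of the claimed land-use determination flow.
# Assumed registry mapping an address to its registered land use
# (standing in for a resident register or certified copy of a register).
REGISTERED_USES = {"1-2-3 Example Town": "farmland"}

def specify_address(image):
    # Placeholder for the address specifying unit's image analysis.
    return image["address"]

def specify_use(image):
    # Placeholder for the use specifying unit's analysis
    # (e.g., crop type, growth status, or buildings on the land).
    return image["observed_use"]

def uses_match(image):
    # Determining unit: does the registered use for the specified
    # address match the use observed in the image?
    address = specify_address(image)
    observed = specify_use(image)
    registered = REGISTERED_USES.get(address)
    return registered is not None and registered == observed

image = {"address": "1-2-3 Example Town", "observed_use": "farmland"}
print(uses_match(image))  # True
```

The time-series variant of claim 7 would call `specify_use` once per frame and flag a change in the returned use before re-running the comparison.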
2,600
10,224
10,224
14,576,454
2,646
A method for using a mobile communication device to purchase a ticket for an event. The method comprises locating an event of interest to a user through the mobile communication device, displaying a seating map of a venue in which the event of interest is being held, the seating map being displayed on the mobile communication device, receiving user input selecting an available seat on the seating map, issuing an electronic ticket to the user for the seat, and storing the electronic ticket on the mobile communication device.
1. A method for conducting a near field communication transaction, the method comprising: receiving at a server transaction data associated with a near field communication (NFC) transaction from an NFC terminal; processing the NFC transaction at the server using the transaction data from the NFC terminal that is transmitted by a secure element embedded within the body of a mobile device comprising a mobile device processor, mobile device memory, and mobile device transceiver, the secure element operable to maintain a secure element application in a secure element memory and a secure element processor that initiates, in response to a near field communication interaction of the secure element with the NFC terminal using a transceiver capable of near field communications, execution of the secure element application, wherein the secure element sends the transaction data to the NFC terminal using the NFC transceiver. 2. The method of claim 1, wherein the secure element application is an identity application and the transaction data includes identity credentials. 3. The method of claim 1, wherein the NFC terminal is a point-of-entry terminal. 4. The method of claim 1, wherein the secure element application is a ticket application and the transaction data includes payment credentials. 5. The method of claim 1, wherein the NFC terminal is a point-of-sale terminal. 6. The method of claim 1, wherein the secure element application is a coupon application and the transaction data includes coupon information. 7. The method of claim 1, wherein the NFC terminal is a point-of-sale terminal. 8. The method of claim 1, wherein the NFC terminal transmits digital artifacts to the mobile device after the NFC transaction. 9. The method of claim 8, wherein the digital artifacts comprise one or more of tickets, coupons, and receipts. 10. 
The method of claim 1, further wherein the server authenticates a user after the transaction data has been transferred from the NFC terminal, but prior to the transaction being processed and completed, wherein the NFC terminal receives manual user authentication based on a determination by the NFC terminal that manual user authentication is required and prompts the user. 11. A server for conducting a near field communication transaction, the server comprising: a server memory; a server interface coupled to the server memory, the server interface operable to receive transaction data associated with a near field communication (NFC) transaction with an NFC terminal, a server processor configured to conduct the near field communication (NFC) transaction with the NFC terminal and process the transaction using the transaction data transmitted to the NFC terminal by a secure element embedded within the body of a mobile device comprising a mobile device processor, mobile device memory, and mobile device transceiver wherein the secure element is operable to maintain a secure element application in a secure element memory and a secure element processor that initiates in response to a near field communication interaction of the secure element with the NFC terminal using a transceiver capable of near field communications, execution of the secure element application wherein the secure element sends transaction data to the NFC terminal using the NFC transceiver. 11. The server of claim 11, wherein the secure element application is an identity application and the transaction data includes identity credentials. 12. The server of claim 11, wherein the NFC terminal is a point-of-entry terminal. 13. The server of claim 11, wherein the secure element application is a ticket application and the transaction data includes payment credentials. 14. The server of claim 11, wherein the NFC terminal is a point-of-sale terminal. 15. 
The server of claim 11, wherein the secure element application is a coupon application and the transaction data includes coupon information. 16. The server of claim 11, wherein the NFC terminal is a point-of-sale terminal. 17. The server of claim 11, wherein the NFC terminal transmits digital artifacts to the mobile device after the NFC transaction. 18. The server of claim 17, wherein the digital artifacts comprise one or more of tickets, coupons, and receipts. 19. The server of claim 11, further wherein the server authenticates a user after the transaction data has been transferred from the NFC terminal, but prior to the transaction being processed and completed, wherein the NFC terminal receives manual user authentication based on a determination by the NFC terminal that manual user authentication is required and prompts the user. 20. A computer readable medium for conducting a near field communication transaction, the medium comprising: computer code for receiving at a server transaction data associated with a near field communication (NFC) transaction from an NFC terminal; computer code for processing the NFC transaction at the server using the transaction data from the NFC terminal that is transmitted by a secure element embedded within the body of a mobile device comprising a mobile device processor, mobile device memory, and mobile device transceiver, the secure element operable to maintain a secure element application in a secure element memory and a secure element processor that initiates, in response to a near field communication interaction of the secure element with the NFC terminal using a transceiver capable of near field communications, execution of the secure element application, wherein the secure element sends the transaction data to the NFC terminal using the NFC transceiver.
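The transaction flow these claims describe — secure element sends data to the terminal, the terminal forwards it to the server, and the server authenticates the user after transfer but before completion — can be sketched as below. All class and field names here are illustrative assumptions, not the applicant's API.

```python
# Hypothetical sketch of the claimed NFC transaction flow.
class SecureElement:
    """Secure element embedded in the mobile device."""
    def __init__(self, credentials):
        self.credentials = credentials

    def send_transaction_data(self):
        # Executes the secure element application and emits
        # transaction data over the NFC transceiver.
        return {"credentials": self.credentials, "type": "ticket"}

class NfcTerminal:
    """Point-of-sale or point-of-entry terminal."""
    def __init__(self, requires_manual_auth=False):
        self.requires_manual_auth = requires_manual_auth

    def collect(self, secure_element):
        data = secure_element.send_transaction_data()
        # If the terminal determines manual authentication is
        # required, it prompts the user (claims 10 and 19).
        data["manual_auth"] = self.requires_manual_auth
        return data

class Server:
    def process(self, data):
        # Authenticate after the data transfer but before the
        # transaction is completed.
        if data["manual_auth"] and not data.get("user_confirmed"):
            return "pending-authentication"
        return "completed"
```

A terminal created with `requires_manual_auth=True` yields a transaction the server holds in `"pending-authentication"` until `user_confirmed` is set.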
2,600
10,225
10,225
15,156,847
2,668
A user recognition method and apparatus, the user recognition method including performing a liveness test by extracting a first feature of a first image acquired by capturing a user, and recognizing the user by extracting a second feature of the first image based on a result of the liveness test, is provided.
1. A user recognition method comprising: receiving a first image acquired by capturing a user; performing a liveness test by extracting a first feature of the first image; and recognizing the user by extracting a second feature of the first image based on a result of the liveness test. 2. The user recognition method of claim 1, further comprising, in a case in which the result of the liveness test corresponds to a failed test: receiving a new image; performing a liveness test by extracting a first feature of the new image; and recognizing the user by extracting a second feature of the new image based on a result of the liveness test with respect to the new image. 3. The user recognition method of claim 2, wherein the first image corresponds to a first frame in a video, and the new image corresponds to a second frame in the video. 4. The user recognition method of claim 1, wherein the performing comprises: generating a second image by diffusing a plurality of pixels included in the first image; calculating diffusion speeds of the pixels based on a difference between the first image and the second image; and extracting the first feature based on the diffusion speeds. 5. The user recognition method of claim 4, wherein the generating comprises: iteratively updating values of the pixels using a diffusion equation. 6. The user recognition method of claim 4, wherein the extracting of the first feature comprises: estimating a surface property related to an object included in the first image based on the diffusion speeds, wherein the surface property comprises at least one of a light-reflective property of a surface of the object, a number of dimensions of the surface of the object, or a material of the surface of the object. 7. 
The user recognition method of claim 4, wherein the extracting of the first feature comprises at least one of: calculating a number of pixels corresponding to a diffusion speed greater than or equal to a first threshold, among the diffusion speeds; calculating a distribution of the pixels corresponding to the diffusion speed greater than or equal to the first threshold, among the diffusion speeds; calculating at least one of an average or a standard deviation of the diffusion speeds; or calculating a filter response based on the diffusion speeds. 8. The user recognition method of claim 4, wherein the extracting of the first feature comprises: extracting a first-scale region from the first image based on the diffusion speeds; and calculating an amount of noise components included in the first-scale region based on a difference between the first-scale region and a result of applying median filtering to the first-scale region. 9. The user recognition method of claim 1, wherein the performing comprises: determining whether an object included in the first image has a planar property or a three-dimensional (3D) structural property, based on the first feature; outputting a signal corresponding to a failed test in a case in which the object is determined to have the planar property; and outputting a signal corresponding to a successful test in a case in which the object is determined to have the 3D structural property. 10. 
The user recognition method of claim 1, wherein the performing comprises: calculating a degree of uniformity in a distribution of light energy included in a plurality of pixels corresponding to an object included in the first image based on the first feature; outputting a signal corresponding to a failed test in a case in which the degree of uniformity in the distribution of the light energy is greater than or equal to a threshold; and outputting a signal corresponding to a successful test in a case in which the degree of uniformity in the distribution of the light energy is less than the threshold. 11. The user recognition method of claim 1, wherein the performing further comprises at least one of: determining whether the first feature corresponds to a feature related to a medium that displays a face or a feature related to an actual face; outputting a signal corresponding to a failed test in a case in which the first feature corresponds to the feature related to a medium that displays a face; or outputting a signal corresponding to a successful test in a case in which the first feature corresponds to the feature related to an actual face. 12. The user recognition method of claim 1, further comprising: receiving a user verification request for approving an electronic commerce payment; and approving the electronic commerce payment in a case in which user verification succeeds in the recognizing. 13. The user recognition method of claim 1, further comprising: receiving a user input which requires user verification; and performing an operation corresponding to the user input in a case in which user verification succeeds in the recognizing. 14. 
The user recognition method of claim 13, wherein the user input which requires user verification comprises at least one of a user input to unlock a screen, a user input to execute a predetermined application, a user input to execute a predetermined function in an application, or a user input to access a predetermined folder or file. 15. The user recognition method of claim 1, further comprising: receiving a user input related to a gallery including a plurality of items of contents; sorting content corresponding to a user among the plurality of items of contents in the gallery in a case in which the user is identified in the recognizing; and providing the sorted content to the user. 16. A user recognition method comprising: receiving a video acquired by capturing a user; performing a liveness test based on at least one first frame in the video; and recognizing the user based on at least one second frame in the video based on a result of the liveness test. 17. The user recognition method of claim 16, wherein the performing comprises at least one of: determining the video to be a live video in a case in which the result of the liveness test corresponds to a successful test; determining the video to be a fake video in a case in which the result of the liveness test corresponds to a failed test; or determining the video to be a live video in a case in which the at least one first frame includes a predetermined number of consecutive frames and a result of a liveness test with respect to the consecutive frames corresponds to a successful test. 18. The user recognition method of claim 16, wherein the at least one first frame differs from the at least one second frame. 19. The user recognition method of claim 16, wherein the at least one first frame includes at least one frame suitable for a liveness test, and the at least one second frame includes at least one frame suitable for user recognition. 20. 
A non-transitory computer-readable medium comprising a program that, when executed on a computer device, causes the computer device to perform the method of claim 1. 21. A user recognition apparatus comprising a processor configured to: perform a liveness test by extracting a first feature of a first image acquired by capturing a user; and recognize the user by extracting a second feature of the first image based on a result of the liveness test. 22. The user recognition apparatus of claim 21, wherein, in a case in which the result of the liveness test corresponds to a failed test, the processor is configured to receive a new image, perform a liveness test by extracting a first feature of the new image, and recognize the user by extracting a second feature of the new image based on a result of the liveness test with respect to the new image. 23. The user recognition apparatus of claim 22, wherein the first image corresponds to a first frame in a video, and the new image corresponds to a second frame in the video. 24. The user recognition apparatus of claim 21, wherein the processor is configured to generate a second image by diffusing a plurality of pixels included in the first image, calculate diffusion speeds of the pixels based on a difference between the first image and the second image, and extract the first feature based on the diffusion speeds.
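The diffusion-based first feature of claims 4 through 7 — diffuse the image, treat the per-pixel difference from the original as a diffusion speed, then summarize the speeds — can be illustrated with a small sketch. The simple neighbor-averaging diffusion step and the threshold value are assumptions standing in for the unspecified diffusion equation; they are not the applicant's actual method.

```python
# Minimal sketch of a diffusion-speed feature for a liveness test.
def diffuse(image, iterations=3):
    """Iteratively update pixel values with a crude diffusion step
    (each pixel becomes the average of itself and its 4-neighbors)."""
    img = [row[:] for row in image]
    h, w = len(img), len(img[0])
    for _ in range(iterations):
        nxt = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                neighbors = [img[y][x]]
                if y > 0:
                    neighbors.append(img[y - 1][x])
                if y < h - 1:
                    neighbors.append(img[y + 1][x])
                if x > 0:
                    neighbors.append(img[y][x - 1])
                if x < w - 1:
                    neighbors.append(img[y][x + 1])
                nxt[y][x] = sum(neighbors) / len(neighbors)
        img = nxt
    return img

def diffusion_speeds(image):
    # Speed = per-pixel difference between the first image and the
    # diffused second image (claim 4).
    diffused = diffuse(image)
    return [[abs(a - b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(image, diffused)]

def first_feature(image, threshold=10.0):
    # Count of fast-diffusing pixels and average speed (claim 7).
    speeds = diffusion_speeds(image)
    flat = [s for row in speeds for s in row]
    fast = sum(1 for s in flat if s >= threshold)
    return {"fast_pixel_count": fast,
            "mean_speed": sum(flat) / len(flat)}
```

The intuition is that a flat medium such as a printed photo or screen tends to produce a different diffusion-speed profile than a real 3D face, which is what the planar-versus-3D decision in claim 9 would threshold on.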
A user recognition method and apparatus, the user recognition method including performing a liveness test by extracting a first feature of a first image acquired by capturing a user, and recognizing the user by extracting a second feature of the first image based on a result of the liveness test, is provided.1. A user recognition method comprising: receiving a first image acquired by capturing a user; performing a liveness test by extracting a first feature of the first image; and recognizing the user by extracting a second feature of the first image based on a result of the liveness test. 2. The user recognition method of claim 1, further comprising, in a case in which the result of the liveness test corresponds to a failed test: receiving a new image; performing a liveness test by extracting a first feature of the new image; and recognizing the user by extracting a second feature of the new image based on a result of the liveness test with respect to the new image. 3. The user recognition method of claim 2, wherein the first image corresponds to a first frame in a video, and the new image corresponds to a second frame in the video. 4. The user recognition method of claim 1, wherein the performing comprises: generating a second image by diffusing a plurality of pixels included in the first image; calculating diffusion speeds of the pixels based on a difference between the first image and the second image; and extracting the first feature based on the diffusion speeds. 5. The user recognition method of claim 4, wherein the generating comprises: iteratively updating value of the pixels using a diffusion equation. 6. 
The user recognition method of claim 4, wherein the extracting of the first feature comprises: estimating a surface property related to an object included in the first image based on the diffusion speeds, wherein the surface property comprises at least one of a light-reflective property of a surface of the object, a number of dimensions of the surface of the object, or a material of the surface of the object. 7. The user recognition method of claim 4, wherein the extracting of the first feature comprises at least one of: calculating a number of pixels corresponding to a diffusion speed greater than or equal to a first threshold, among the diffusion speeds; calculating a distribution of the pixels corresponding to the diffusion speed greater than or equal to the first threshold, among the diffusion speeds; calculating at least one of an average or a standard deviation of the diffusion speeds; or calculating a filter response based on the diffusion speeds. 8. The user recognition method of claim 4, wherein the extracting of the first feature comprises: extracting a first-scale region from the first image based on the diffusion speeds; and calculating an amount of noise components included in the first-scale region based on a difference between the first-scale region and a result of applying median filtering to the first-scale region. 9. The user recognition method of claim 1, wherein the performing comprises: determining whether an object included in the first image has a planar property or a three-dimensional (3D) structural property, based on the first feature; outputting a signal corresponding a failed test in a case in which the object is determined to have the planar property; and outputting a signal corresponding to a successful test in a case in which the object is determined to have the 3D structural property. 10. 
The user recognition method of claim 1, wherein the performing comprises: calculating a degree of uniformity in a distribution of light energy included in a plurality of pixels corresponding to an object included in the first image based on the first feature; outputting a signal corresponding to a failed test in a case in which the degree of uniformity in the distribution of the light energy is greater than or equal to a threshold; and outputting a signal corresponding to a successful test in a case in which the degree of uniformity in the distribution of the light energy is less than the threshold. 11. The user recognition method of claim 1, wherein the performing further comprises at least one of: determining whether the first feature corresponds to a feature related to a medium that displays a face or a feature related to an actual face; outputting a signal corresponding to a failed test in a case in which the first feature corresponds to the feature related to a medium that displays a face; or outputting a signal corresponding to a successful test in a case in which the first feature corresponds to the feature related to an actual face. 12. The user recognition method of claim 1, further comprising: receiving a user verification request for approving an electronic commerce payment; and approving the electronic commerce payment in a case in which user verification succeeds in the recognizing. 13. The user recognition method of claim 1, further comprising: receiving a user input which requires user verification; and performing an operation corresponding to the user input in a case in which user verification succeeds in the recognizing. 14. 
The user recognition method of claim 13, wherein the user input which requires user verification comprises at least one of a user input to unlock a screen, a user input to execute a predetermined application, a user input to execute a predetermined function in an application, or a user input to access a predetermined folder or file. 15. The user recognition method of claim 1, further comprising: receiving a user input related to a gallery including a plurality of items of contents; sorting content corresponding to a user among the plurality of items of contents in the gallery in a case in which the user is identified in the recognizing; and providing the sorted content to the user. 16. A user recognition method comprising: receiving a video acquired by capturing a user; performing a liveness test based on at least one first frame in the video; and recognizing the user based on at least one second frame in the video based on a result of the liveness test. 17. The user recognition method of claim 16, wherein the performing comprises at least one of: determining the video to be a live video in a case in which the result of the liveness test corresponds to a successful test; determining the video to be a fake video in a case in which the result of the liveness test corresponds to a failed test; or determining the video to be a live video in a case in which the at least one first frame includes a predetermined number of consecutive frames and a result of a liveness test with respect to the consecutive frames corresponds to a successful test. 18. The user recognition method of claim 16, wherein the at least one first frame differs from the at least one second frame. 19. The user recognition method of claim 16, wherein the at least one first frame includes at least one frame suitable for a liveness test, and the at least one second frame includes at least one frame suitable for user recognition. 20. 
A non-transitory computer-readable medium comprising a program that, when executed on a computer device, causes the computer device to perform the method of claim 1. 21. A user recognition apparatus comprising a processor configured to: perform a liveness test by extracting a first feature of a first image acquired by capturing a user; and recognize the user by extracting a second feature of the first image based on a result of the liveness test. 22. The user recognition apparatus of claim 21, wherein, in a case in which the result of the liveness test corresponds to a failed test, the processor is configured to receive a new image, perform a liveness test by extracting a first feature of the new image, and recognize the user by extracting a second feature of the new image based on a result of the liveness test with respect to the new image. 23. The user recognition apparatus of claim 22, wherein the first image corresponds to a first frame in a video, and the new image corresponds to a second frame in the video. 24. The user recognition apparatus of claim 21, wherein the processor is configured to generate a second image by diffusing a plurality of pixels included in the first image, calculate diffusion speeds of the pixels based on a difference between the first image and the second image, and extract the first feature based on the diffusion speeds.
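Claim 24 (together with claims 4 and 7) spells out the core liveness computation: diffuse the captured image to produce a second image, treat each pixel's change as its diffusion speed, and extract statistics of the speeds as the first feature. A minimal, non-authoritative sketch of that idea follows; the 4-neighbour diffusion model, iteration count, and speed threshold below are assumptions, not values from the claims:

```python
import numpy as np

def diffuse(image, iterations=10, kappa=0.1):
    """Produce the 'second image' of claim 24 by diffusing the pixels.

    A plain heat-equation step via 4-neighbour averaging with replicated
    edges; the claims do not fix the diffusion model, so this is an
    illustrative choice.
    """
    img = image.astype(float)
    for _ in range(iterations):
        up = np.roll(img, -1, axis=0); up[-1] = img[-1]
        down = np.roll(img, 1, axis=0); down[0] = img[0]
        left = np.roll(img, -1, axis=1); left[:, -1] = img[:, -1]
        right = np.roll(img, 1, axis=1); right[:, 0] = img[:, 0]
        img = img + kappa * (up + down + left + right - 4.0 * img)
    return img

def liveness_features(first_image, speed_threshold=5.0):
    """First-feature statistics in the spirit of claims 4 and 7."""
    second_image = diffuse(first_image)
    # Diffusion speed per pixel: magnitude of change between the images.
    speeds = np.abs(first_image.astype(float) - second_image)
    fast = speeds >= speed_threshold
    return {
        "n_fast_pixels": int(fast.sum()),    # pixel count above threshold
        "mean_speed": float(speeds.mean()),  # average of the speeds
        "std_speed": float(speeds.std()),    # standard deviation
    }
```

The intuition behind claim 9 is that a flat medium (photo, screen) yields a more uniform speed map than a real 3D face, whose shading structure diffuses unevenly.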
2,600
10,226
10,226
15,479,695
2,657
A method and system in which a transcription of multi-party communication is provided. A plurality of speakers are recorded using any of a variety of recording devices. A copy of the recording is processed through a diarisation process to create a final diarisation product and a second copy of the recording is processed through a transcription process to create a final transcription product. The final diarisation product is used to differentiate individual speakers of the plurality of speakers in a final transcript. The final transcript and audio samples of each of the voice prints identified through the diarisation process are presented to a reviewer to determine the identity of each of the differentiated individual speakers. The identity of each of the differentiated individual speakers is then inserted into the final transcript.
1. A method for transcribing multi-party communication, comprising: recording a plurality of speakers; processing a first copy of the recording through a diarisation process in which an audio stream is partitioned into audio samples according to speaker identity to create a final diarisation product; processing a second copy of the recording through a transcription process in which the recording is transcribed into text to create a final transcription product; using the final diarisation product to differentiate individual speakers of the plurality of speakers in a final transcript; presenting the final transcript and audio samples of each voice print identified through the diarisation process to a reviewer to identify each of the differentiated individual speakers; and inserting the identity of each of the differentiated individual speakers into the final transcript. 2. The method of claim 1, wherein the diarisation process is a combination of speaker segmentation and speaker clustering. 3. The method of claim 1, wherein the plurality of speakers are recorded over a conference bridge. 4. The method of claim 1, wherein the plurality of speakers are recorded using a single audio recording device. 5. The method of claim 1, wherein processing a first copy of the recording through the diarisation process and processing a second copy of the recording through the transcription process occur simultaneously. 6. The method of claim 1, wherein the reviewer is a human. 7. The method of claim 1, wherein the reviewer reviews audio samples to identify each of the plurality of speakers. 8. 
A system for transcribing multi-party communication, comprising: a recording device for recording a plurality of speakers; a first processor for processing a first copy of the recording through a diarisation process in which an audio stream is partitioned into audio samples according to speaker identity to create a final diarisation product; a second processor for processing a second copy of the recording through a transcription process in which the recording is transcribed into text to create a final transcription product, wherein the final diarisation product is used to differentiate individual speakers of the plurality of speakers in a final transcript; and a reviewer who is presented with the final transcript and audio samples of each voice print identified through the diarisation process to identify each of the differentiated individual speakers, wherein the identity of each of the differentiated individual speakers is inserted into the final transcript. 9. The system of claim 8, wherein the diarisation process is a combination of speaker segmentation and speaker clustering. 10. The system of claim 8, wherein the recording device includes a conference bridge. 11. The system of claim 8, wherein processing a first copy of the recording through the diarisation process and processing a second copy of the recording through the transcription process occur simultaneously. 12. The system of claim 8, wherein the reviewer is a human. 13. The system of claim 8, wherein the reviewer reviews audio samples to identify each of the plurality of speakers.
A method and system in which a transcription of multi-party communication is provided. A plurality of speakers are recorded using any of a variety of recording devices. A copy of the recording is processed through a diarisation process to create a final diarisation product and a second copy of the recording is processed through a transcription process to create a final transcription product. The final diarisation product is used to differentiate individual speakers of the plurality of speakers in a final transcript. The final transcript and audio samples of each of the voice prints identified through the diarisation process are presented to a reviewer to determine the identity of each of the differentiated individual speakers. The identity of each of the differentiated individual speakers is then inserted into the final transcript.1. A method for transcribing multi-party communication, comprising: recording a plurality of speakers; processing a first copy of the recording through a diarisation process in which an audio stream is partitioned into audio samples according to speaker identity to create a final diarisation product; processing a second copy of the recording through a transcription process in which the recording is transcribed into text to create a final transcription product; using the final diarisation product to differentiate individual speakers of the plurality of speakers in a final transcript; presenting the final transcript and audio samples of each voice print identified through the diarisation process to a reviewer to identify each of the differentiated individual speakers; and inserting the identity of each of the differentiated individual speakers into the final transcript. 2. The method of claim 1, wherein the diarisation process is a combination of speaker segmentation and speaker clustering. 3. The method of claim 1, wherein the plurality of speakers are recorded over a conference bridge. 4. 
The method of claim 1, wherein the plurality of speakers are recorded using a single audio recording device. 5. The method of claim 1, wherein processing a first copy of the recording through the diarisation process and processing a second copy of the recording through the transcription process occur simultaneously. 6. The method of claim 1, wherein the reviewer is a human. 7. The method of claim 1, wherein the reviewer reviews audio samples to identify each of the plurality of speakers. 8. A system for transcribing multi-party communication, comprising: a recording device for recording a plurality of speakers; a first processor for processing a first copy of the recording through a diarisation process in which an audio stream is partitioned into audio samples according to speaker identity to create a final diarisation product; a second processor for processing a second copy of the recording through a transcription process in which the recording is transcribed into text to create a final transcription product, wherein the final diarisation product is used to differentiate individual speakers of the plurality of speakers in a final transcript; and a reviewer who is presented with the final transcript and audio samples of each voice print identified through the diarisation process to identify each of the differentiated individual speakers, wherein the identity of each of the differentiated individual speakers is inserted into the final transcript. 9. The system of claim 8, wherein the diarisation process is a combination of speaker segmentation and speaker clustering. 10. The system of claim 8, wherein the recording device includes a conference bridge. 11. The system of claim 8, wherein processing a first copy of the recording through the diarisation process and processing a second copy of the recording through the transcription process occur simultaneously. 12. The system of claim 8, wherein the reviewer is a human. 13. 
The system of claim 8, wherein the reviewer reviews audio samples to identify each of the plurality of speakers.
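Claim 1's merge step — using the final diarisation product to differentiate speakers in the final transcript — amounts to a timestamp join between speaker segments and transcribed words. A minimal sketch; the segment and word formats below are illustrative assumptions, since the claims do not specify data structures:

```python
def label_transcript(diarisation, words):
    """Merge a diarisation product with a transcription product.

    diarisation: list of (start_s, end_s, speaker_label) segments.
    words: list of (timestamp_s, text) from the transcription process.
    Returns a speaker-differentiated transcript as (speaker, text) turns,
    ready for a reviewer to replace labels with real identities.
    """
    transcript = []
    for t, text in words:
        # Find the diarisation segment covering this word's timestamp.
        speaker = next(
            (label for start, end, label in diarisation if start <= t < end),
            "unknown",
        )
        if transcript and transcript[-1][0] == speaker:
            # Same speaker as the previous word: extend the current turn.
            transcript[-1] = (speaker, transcript[-1][1] + " " + text)
        else:
            transcript.append((speaker, text))
    return transcript
```

The reviewer step of claims 1 and 7 would then map each placeholder label ("SPK1", "SPK2", …) to a name after listening to that voice print's audio samples.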
2,600
10,227
10,227
14,550,590
2,696
An image sensor device includes a substrate, a die, and an adhesive layer positioned between the substrate and the die. The substrate includes a first side having a curved surface. The die includes an image sensor component attached to the curved surface of the substrate. At least a portion of the die comprising the image sensor component has a curved surface. The adhesive layer positioned between the curved surface of the substrate and the die provides a fixed attachment between the die and the substrate.
1. A camera, the camera comprising: one or more lenses for directing light to an image sensor component of the camera; and an image sensor device, wherein the image sensor device comprises a substrate, wherein the substrate comprises: a first side having a curved surface; a die comprising an image sensor component attached to the curved surface of the substrate, wherein at least a portion of the die comprising the image sensor component has a curved surface. 2. The camera of claim 1, wherein, the at least a portion of the die comprising the image sensor component having a curved surface further comprises at least a portion of the die comprising the image sensor component having a curved surface conforming to a focal radius of one of the one or more lenses for directing light to the image sensor component, wherein the at least a portion of the die comprising the image sensor component having a curved surface includes a curved surface bent after separation of the die from other dice of a wafer to conform to a focal radius of one of the one or more lenses for directing light to the image sensor component. 3. The camera of claim 1, further comprising: a heat-cured adhesive layer positioned between the curved surface of the substrate and the die, wherein the heat-cured adhesive layer provides a fixed attachment between the die and the substrate. 4. The camera of claim 1, wherein, the substrate further comprises: a second side having a flat surface for attachment of the substrate to an articulating component for articulating the lens to the image sensor. 5. The camera of claim 1, wherein the first side having a curved surface further comprises at least a portion of the substrate having a curved surface conforming to a focal radius of one of the one or more lenses for directing light to the image sensor component. 6. 
An image sensor device, the image sensor device comprising: a substrate, wherein the substrate comprises: a first side having a curved surface; a die comprising an image sensor component attached to the curved surface of the substrate, wherein at least a portion of the die comprising the image sensor component has a curved surface; and an adhesive layer positioned between the curved surface of the substrate and the die, wherein the adhesive layer provides a fixed attachment between the die and the substrate. 7. The image sensor device of claim 6, wherein, the at least a portion of the die comprising the image sensor component having a curved surface further comprises at least a portion of the die comprising the image sensor component having a curved surface conforming to a focal radius of a lens for depositing light on the image sensor component in a camera comprising the image sensor device. 8. The image sensor device of claim 6, wherein the adhesive layer positioned between the curved surface of the substrate and the die comprises a heat-cured adhesive layer for providing a fixed alignment between the die and the lens. 9. The image sensor device of claim 6, wherein the adhesive layer positioned between the curved surface of the substrate and the die comprises a pressure-sensitive adhesive layer for providing a fixed alignment between the die and the lens. 10. The image sensor device of claim 6, wherein the adhesive layer positioned between the curved surface of the substrate and the die comprises a light-cured adhesive layer for providing a fixed alignment between the die and the lens. 11. The image sensor device of claim 6, wherein, the first side having a curved surface further comprises at least a portion of the substrate having a curved surface conforming to a focal radius of a lens for depositing light on the image sensor component in a camera comprising the image sensor device. 12. 
A method for manufacturing an image sensor device, the method comprising: depositing an adhesive layer onto a substrate in a stage tool, wherein the substrate comprises a curved surface positioned in the stage tool to receive the adhesive layer; depositing a die onto the adhesive layer, wherein the die contains an image sensor; and applying pressure to the die using a bond tool, wherein the applying pressure to the die further comprises applying a pressure calculated to create a curvature of the die corresponding to a curvature of the curved surface of the substrate; and curing the adhesive, wherein the curing the adhesive further comprises delivering energy to the adhesive using the bond tool. 13. The method of claim 12, wherein the depositing the adhesive layer onto a substrate in the stage tool further comprises depositing the adhesive onto a surface area smaller than a surface area of the curved surface of the substrate, and applying pressure to the die using a bond tool after separation of the dice from the wafer. 14. The method of claim 12, wherein the applying pressure to the die further comprises applying the pressure using a bond tool having a cavity on a surface of the bond tool for prevention of contact with critical areas of the die. 15. The method of claim 12, further comprising, positioning a substrate in a stage tool, wherein the positioning a substrate in the stage tool further comprises positioning the substrate with a flat side of a substrate facing a complementary surface of the stage tool and a curved surface of the substrate facing an opening of the stage tool designed for receiving the die and bond tool. 16. The method of claim 12, wherein the delivering energy to the adhesive using the bond tool is performed subsequent to initiation of the applying pressure calculated to create the curvature of the die corresponding to the curvature of the curved surface of the substrate. 17. 
The method of claim 12, wherein the delivering energy to the adhesive using the bond tool is performed prior to the applying pressure calculated to create the curvature of the die corresponding to the curvature of the curved surface of the substrate. 18. The method of claim 12, wherein the delivering energy to the adhesive using the bond tool further comprises delivering kinetic energy to a pressure-sensitive adhesive using a strike of the bond tool. 19. The method of claim 12, wherein the delivering energy to the adhesive using the bond tool further comprises delivering thermal energy to the adhesive using a thermal resistor located within the bond tool. 20. The method of claim 12, wherein the delivering energy to the adhesive using the bond tool further comprises delivering light energy to the adhesive using a light source.
An image sensor device includes a substrate, a die, and an adhesive layer positioned between the substrate and the die. The substrate includes a first side having a curved surface. The die includes an image sensor component attached to the curved surface of the substrate. At least a portion of the die comprising the image sensor component has a curved surface. The adhesive layer positioned between the curved surface of the substrate and the die provides a fixed attachment between the die and the substrate.1. A camera, the camera comprising: one or more lenses for directing light to an image sensor component of the camera; and an image sensor device, wherein the image sensor device comprises a substrate, wherein the substrate comprises: a first side having a curved surface; a die comprising an image sensor component attached to the curved surface of the substrate, wherein at least a portion of the die comprising the image sensor component has a curved surface. 2. The camera of claim 1, wherein, the at least a portion of the die comprising the image sensor component having a curved surface further comprises at least a portion of the die comprising the image sensor component having a curved surface conforming to a focal radius of one of the one or more lenses for directing light to the image sensor component, wherein the at least a portion of the die comprising the image sensor component having a curved surface includes a curved surface bent after separation of the die from other dice of a wafer to conform to a focal radius of one of the one or more lenses for directing light to the image sensor component. 3. The camera of claim 1, further comprising: a heat-cured adhesive layer positioned between the curved surface of the substrate and the die, wherein the heat-cured adhesive layer provides a fixed attachment between the die and the substrate. 4. 
The camera of claim 1, wherein, the substrate further comprises: a second side having a flat surface for attachment of the substrate to an articulating component for articulating the lens to the image sensor. 5. The camera of claim 1, wherein the first side having a curved surface further comprises at least a portion of the substrate having a curved surface conforming to a focal radius of one of the one or more lenses for directing light to the image sensor component. 6. An image sensor device, the image sensor device comprising: a substrate, wherein the substrate comprises: a first side having a curved surface; a die comprising an image sensor component attached to the curved surface of the substrate, wherein at least a portion of the die comprising the image sensor component has a curved surface; and an adhesive layer positioned between the curved surface of the substrate and the die, wherein the adhesive layer provides a fixed attachment between the die and the substrate. 7. The image sensor device of claim 6, wherein, the at least a portion of the die comprising the image sensor component having a curved surface further comprises at least a portion of the die comprising the image sensor component having a curved surface conforming to a focal radius of a lens for depositing light on the image sensor component in a camera comprising the image sensor device. 8. The image sensor device of claim 6, wherein the adhesive layer positioned between the curved surface of the substrate and the die comprises a heat-cured adhesive layer for providing a fixed alignment between the die and the lens. 9. The image sensor device of claim 6, wherein the adhesive layer positioned between the curved surface of the substrate and the die comprises a pressure-sensitive adhesive layer for providing a fixed alignment between the die and the lens. 10. 
The image sensor device of claim 6, wherein the adhesive layer positioned between the curved surface of the substrate and the die comprises a light-cured adhesive layer for providing a fixed alignment between the die and the lens. 11. The image sensor device of claim 6, wherein, the first side having a curved surface further comprises at least a portion of the substrate having a curved surface conforming to a focal radius of a lens for depositing light on the image sensor component in a camera comprising the image sensor device. 12. A method for manufacturing an image sensor device, the method comprising: depositing an adhesive layer onto a substrate in a stage tool, wherein the substrate comprises a curved surface positioned in the stage tool to receive the adhesive layer; depositing a die onto the adhesive layer, wherein the die contains an image sensor; and applying pressure to the die using a bond tool, wherein the applying pressure to the die further comprises applying a pressure calculated to create a curvature of the die corresponding to a curvature of the curved surface of the substrate; and curing the adhesive, wherein the curing the adhesive further comprises delivering energy to the adhesive using the bond tool. 13. The method of claim 12, wherein the depositing the adhesive layer onto a substrate in the stage tool further comprises depositing the adhesive onto a surface area smaller than a surface area of the curved surface of the substrate, and applying pressure to the die using a bond tool after separation of the dice from the wafer. 14. The method of claim 12, wherein the applying pressure to the die further comprises applying the pressure using a bond tool having a cavity on a surface of the bond tool for prevention of contact with critical areas of the die. 15. 
The method of claim 12, further comprising, positioning a substrate in a stage tool, wherein the positioning a substrate in the stage tool further comprises positioning the substrate with a flat side of a substrate facing a complementary surface of the stage tool and a curved surface of the substrate facing an opening of the stage tool designed for receiving the die and bond tool. 16. The method of claim 12, wherein the delivering energy to the adhesive using the bond tool is performed subsequent to initiation of the applying pressure calculated to create the curvature of the die corresponding to the curvature of the curved surface of the substrate. 17. The method of claim 12, wherein the delivering energy to the adhesive using the bond tool is performed prior to the applying pressure calculated to create the curvature of the die corresponding to the curvature of the curved surface of the substrate. 18. The method of claim 12, wherein the delivering energy to the adhesive using the bond tool further comprises delivering kinetic energy to a pressure-sensitive adhesive using a strike of the bond tool. 19. The method of claim 12, wherein the delivering energy to the adhesive using the bond tool further comprises delivering thermal energy to the adhesive using a thermal resistor located within the bond tool. 20. The method of claim 12, wherein the delivering energy to the adhesive using the bond tool further comprises delivering light energy to the adhesive using a light source.
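The method claims bend a flat die to conform to the curved substrate and, per claims 2 and 5, to a lens's focal radius. The centre-to-edge deflection this requires follows from the sagitta formula h = R − √(R² − (w/2)²). A small helper, purely illustrative since the claims give no dimensions (both arguments are hypothetical inputs):

```python
import math

def die_sag(focal_radius_mm, die_width_mm):
    """Centre-to-edge deflection (sagitta) of a die bent to radius R:
    h = R - sqrt(R^2 - (w/2)^2).

    Illustrative geometry only; the claims state that the curvature
    conforms to a lens's focal radius but specify no numbers.
    """
    half_width = die_width_mm / 2.0
    if half_width >= focal_radius_mm:
        raise ValueError("die is too wide for the requested bend radius")
    return focal_radius_mm - math.sqrt(focal_radius_mm ** 2 - half_width ** 2)
```

For example, a 10 mm wide die conformed to a 50 mm focal radius deflects roughly 0.25 mm at its edges, which gives a feel for the curvature the bond-tool pressure of claim 12 must produce.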
2,600
10,228
10,228
13,818,962
2,625
To reduce the influence of tactile sensation provision on position detection by a touch sensor, a tactile sensation providing apparatus includes a touch sensor 11, a tactile sensation providing unit 12 disposed near the sensor 11 and configured to vibrate the sensor 11, a touch sensor control unit 20 configured to transmit a scanning signal to the sensor 11 and, by receiving the signal from the sensor 11, to detect the position of a contact to the sensor, signal lines 16, 18 configured to transmit the signal between the sensor 11 and the control unit 20, and a tactile sensation control unit 30 configured to control the providing unit 12 to vibrate the sensor 11 based on the position of the contact detected by the control unit 20. The providing unit 12 is disposed so as not to overlap the lines 16, 18.
1. A tactile sensation providing apparatus comprising: a touch sensor; a tactile sensation providing unit disposed near the touch sensor and configured to vibrate the touch sensor; a touch sensor control unit configured to transmit a scanning signal to the touch sensor and, by receiving the scanning signal from the touch sensor, to detect a position of a contact to the touch sensor; a signal line configured to transmit the scanning signal between the touch sensor and the touch sensor control unit; and a tactile sensation control unit configured to, based on the position of the contact detected by the touch sensor control unit, control the tactile sensation providing unit to vibrate the touch sensor, wherein the tactile sensation providing unit is arranged avoiding overlapping with the signal line. 2. The tactile sensation providing apparatus according to claim 1, wherein the touch sensor includes a capacitive type touch sensor. 3. A tactile sensation providing apparatus comprising: a touch sensor; a tactile sensation providing unit configured to vibrate the touch sensor; and a tactile sensation control unit configured to control such that a period in which scanning is performed for detecting a position of a contact to the touch sensor and a period in which the tactile sensation providing unit vibrates the touch sensor do not overlap with each other. 4. The tactile sensation providing apparatus according to claim 3, wherein the touch sensor includes a capacitive type touch sensor. 5. The tactile sensation providing apparatus according to claim 3, wherein the tactile sensation control unit, during the period in which the touch sensor is vibrated, applies a drive voltage such that the tactile sensation providing unit generates vibration.
To reduce the influence of tactile sensation provision on position detection by a touch sensor, a tactile sensation providing apparatus includes a touch sensor 11, a tactile sensation providing unit 12 disposed near the sensor 11 and configured to vibrate the sensor 11, a touch sensor control unit 20 configured to transmit a scanning signal to the sensor 11 and, by receiving the signal from the sensor 11, to detect the position of a contact to the sensor, signal lines 16, 18 configured to transmit the signal between the sensor 11 and the control unit 20, and a tactile sensation control unit 30 configured to control the providing unit 12 to vibrate the sensor 11 based on the position of the contact detected by the control unit 20. The providing unit 12 is disposed so as not to overlap the lines 16, 18.1. A tactile sensation providing apparatus comprising: a touch sensor; a tactile sensation providing unit disposed near the touch sensor and configured to vibrate the touch sensor; a touch sensor control unit configured to transmit a scanning signal to the touch sensor and, by receiving the scanning signal from the touch sensor, to detect a position of a contact to the touch sensor; a signal line configured to transmit the scanning signal between the touch sensor and the touch sensor control unit; and a tactile sensation control unit configured to, based on the position of the contact detected by the touch sensor control unit, control the tactile sensation providing unit to vibrate the touch sensor, wherein the tactile sensation providing unit is arranged avoiding overlapping with the signal line. 2. The tactile sensation providing apparatus according to claim 1, wherein the touch sensor includes a capacitive type touch sensor. 3. 
A tactile sensation providing apparatus comprising: a touch sensor; a tactile sensation providing unit configured to vibrate the touch sensor; and a tactile sensation control unit configured to control such that a period in which scanning is performed for detecting a position of a contact to the touch sensor and a period in which the tactile sensation providing unit vibrates the touch sensor do not overlap with each other. 4. The tactile sensation providing apparatus according to claim 3, wherein the touch sensor includes a capacitive type touch sensor. 5. The tactile sensation providing apparatus according to claim 3, wherein the tactile sensation control unit, during the period in which the touch sensor is vibrated, applies a drive voltage such that the tactile sensation providing unit generates vibration.
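Claim 3's requirement that the scan period and the vibration period never overlap amounts to time-slicing each control frame: scan first, then vibrate only in the remainder. A minimal sketch; the 20 ms frame and 5 ms scan window are invented numbers, not values from the claims:

```python
def split_frame(frame_ms=20, scan_ms=5, vibrate=False):
    """Allocate one control frame so scanning and vibration never overlap.

    Returns ((scan_start, scan_end), vibrate_window) in milliseconds,
    where vibrate_window is (start, end) or None if no tactile feedback
    was requested for this frame.
    """
    scan_window = (0, scan_ms)
    # Vibration, when requested, only runs after the scan window closes,
    # so the drive voltage cannot disturb the capacitive position scan.
    vibrate_window = (scan_ms, frame_ms) if vibrate else None
    return scan_window, vibrate_window
```

The design choice mirrors the claim's rationale: a capacitive scan measures small charge changes, so driving the vibration element during the scan would corrupt position detection.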
2,600
10,229
10,229
15,293,798
2,668
A system includes an infrared camera, an infrared light source, and a processor. The processor is programmed to receive, from the infrared camera, an image of a hand illuminated using the infrared light source, and send the image to a remote server to identify a user corresponding to the hand according to a vein pattern of the hand. An image of a hand illuminated using an infrared light source is received from an infrared camera. Region-of-interest segmentation is performed on the image to generate a segmented image of consistent hand location and orientation. Feature extraction is performed on the segmented image to generate a feature-extracted vein image. Matching of the feature-extracted vein image is performed against a database of feature-extracted vein images to identify a user identity corresponding to the hand.
1. A system comprising: an infrared camera; an infrared light source; and a processor, programmed to receive, from the infrared camera, an image of a hand illuminated using the infrared light source, and send the image to a remote server to identify a user corresponding to the hand according to a vein pattern of the hand. 2. The system of claim 1, wherein the infrared camera includes an infrared filter for eliminating interference from light sources other than the infrared light source. 3. The system of claim 2, wherein the infrared filter is located between a lens and a complementary metal-oxide-semiconductor (CMOS) sensor of the infrared camera. 4. The system of claim 2, wherein the infrared filter is an 850 nanometer infrared cut filter. 5. The system of claim 1, wherein the processor is further programmed to access immunization records for the user. 6. The system of claim 1, wherein the infrared light source is an infrared flashlight having a plurality of illumination intensity settings. 7. The system of claim 1, wherein the infrared camera and processor are integrated components of a mobile device. 8. The system of claim 7, wherein the infrared light source is an integrated component of the mobile device. 9. A method comprising: receiving, from an infrared camera, an image of a hand illuminated using an infrared light source; performing region-of-interest segmentation on the image to generate a segmented image of consistent hand location and orientation; performing feature extraction of the segmented image to generate a feature-extracted vein image; and matching the feature-extracted vein image against a database of feature-extracted vein images to identify a user corresponding to the hand. 10. The method of claim 9, further comprising receiving the image, over a communication network, from a transceiver of a mobile device including the infrared camera. 11. 
The method of claim 9, further comprising: converting the segmented image into a grayscale image; employing a Multi-scale Gaussian Matched Filter to extract vein pattern lines from the segmented image; and performing binarization on the vein pattern lines to generate a binary image. 12. The method of claim 11, further comprising employing a de-noise algorithm for noise reduction of the binary image. 13. The method of claim 9, further comprising accessing a database to retrieve immunization records for the user corresponding to the hand. 14. The method of claim 9, further comprising applying an infrared filter to the infrared camera to eliminate interference from light sources other than the infrared light source. 15. The method of claim 9, further comprising calculating the feature-extracted vein image using a thinning algorithm that refines vein patterns to single-pixel lines. 16. A system comprising: a mobile device, including a processor and a memory, the mobile device programmed to execute instructions stored to the memory to: receive, from an infrared camera, an image of a hand illuminated using an infrared light source; perform region-of-interest segmentation on the image to generate a segmented image of consistent hand location and orientation; perform feature extraction of the segmented image to generate a feature-extracted vein image; and match the feature-extracted vein image against a database of feature-extracted vein images to identify a user corresponding to the hand. 17. The system of claim 16, wherein the infrared camera includes an infrared filter for eliminating interference from light sources other than the infrared light source, the infrared filter being an 850 nanometer infrared cut filter located between a lens and a complementary metal-oxide-semiconductor (CMOS) sensor of the infrared camera. 18. The system of claim 16, wherein the infrared light source is an infrared flashlight having a plurality of illumination intensity settings. 19. 
The system of claim 16, wherein the infrared camera is an integrated component of the mobile device. 20. The system of claim 16, wherein the infrared light source is an integrated component of the mobile device.
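Claim 11 above outlines the vein feature-extraction pipeline: grayscale conversion, a multi-scale Gaussian matched filter to pull out vein pattern lines, then binarization. A minimal sketch of that pipeline, approximating the matched filter with a multi-scale Laplacian-of-Gaussian; the function names, scale set, and percentile threshold are illustrative assumptions, not taken from the application:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def multiscale_matched_filter(gray, sigmas=(1.0, 2.0, 3.0)):
    # Approximate a multi-scale Gaussian matched filter: dark, line-like
    # veins give a strong positive Laplacian-of-Gaussian response, so we
    # keep the best response across scales at each pixel.
    responses = [gaussian_laplace(gray.astype(float), s) for s in sigmas]
    return np.max(responses, axis=0)

def binarize(response, pct=90):
    # Binarization step: keep only the strongest responses as vein pixels.
    return (response >= np.percentile(response, pct)).astype(np.uint8)
```

Claim 15's feature extraction would then thin this binary map to single-pixel lines before matching against the database.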
2,600
10,230
10,230
12,717,865
2,651
A system and method embodying the invention utilize voice applications that are performed by voice applications agents resident on user local devices to deliver messages to the users. The voice applications can also be used to collect information from the users. Also, voice applications can be used to allow users to purchase goods and services. Voice applications for these purposes could be customized to take into account the individual characteristics of the users.
1. A method of communicating a message to a plurality of users who are connected to a distributed voice application execution system via a digital data connection, comprising: generating a voice application designed to deliver the message to the users; and causing voice applications agents resident on a plurality of different local devices to perform the generated voice application.
2,600
10,231
10,231
15,500,564
2,627
A stylus having an applied force-sensitive tip-switch is disclosed. The stylus includes a tip-switch responsive to a variable applied force. Control circuitry determines whether the tip-switch has made contact with an object and, if so, the magnitude of the applied force resulting from the contact. The circuitry then encodes a signal that varies with the magnitude of the applied force and transmits the encoded signal to a computing device, enabling the computing device to indicate the two- or three-dimensional path of the tip-switch in the stylus on a computer screen. The circuitry also monitors a manual override switch that activates an encoded override signal for interpretation and use by the computing device while indicating the path of the tip-switch on a computer screen.
1. A stylus for use with a computing device, comprising: a housing; a tip-switch operatively coupled to the housing, wherein the tip-switch is responsive to an applied force; control circuitry to detect the magnitude of a force applied to the tip-switch and to produce a signal; and an antenna to transmit a signal representing the magnitude of the force applied to the tip-switch to the computing device. 2. The stylus of claim 1, wherein the housing has an outer surface, at least a portion of which is retro-reflective. 3. The stylus of claim 1, further comprising an accelerometer. 4. The stylus of claim 1, wherein the control circuitry is operable to detect a threshold level of the magnitude of the force applied to the tip-switch. 5. The stylus of claim 1, further comprising a manual switch to override the tip-switch, wherein the stylus becomes operable when no force is applied to the tip-switch. 6. A force sensitive stylus for use with a computing device, comprising: a housing; a tip-switch operatively coupled to the housing, wherein the tip-switch is responsive to an applied force; control circuitry to detect the magnitude of a force applied to the tip-switch and to produce a signal, the value of the signal varying with the magnitude of the applied force; a radio-frequency antenna to transmit the signal representing the magnitude of the force applied to the tip-switch; and a manual switch to override the tip-switch, wherein the stylus becomes operable when no force is applied to the tip-switch. 7. The stylus of claim 6, wherein the housing has an outer surface at least a portion of which is retro-reflective. 8. The stylus of claim 7, further comprising an accelerometer, wherein the accelerometer refines position determinations of the stylus in at least two dimensions. 9. The stylus of claim 8, wherein the control circuitry is operable to detect a threshold level of the magnitude of the force applied to the tip-switch. 10. 
The stylus of claim wherein the magnitude of the applied force ranges from approximately zero to approximately 400 grams. 11. A method for sensing the force applied to the tip of a stylus and providing a signal to a computing device representative of the magnitude of the force, comprising: providing a force-sensing stylus, comprising: a housing; a tip-switch operatively coupled to the housing, wherein the tip-switch is responsive to an applied force; control circuitry to detect the magnitude of a force applied to the tip-switch and to produce a signal, the value of the signal varying with the magnitude of the applied force; and a radio-frequency antenna to transmit the signal representing the magnitude of the force applied to the tip-switch; producing the signal varying with the magnitude of the applied force upon movement of the tip-switch with respect to the housing; and transmitting the signal to the computing device. 12. The method of claim 11, wherein the force-sensing stylus further comprises a manual switch to override the tip-switch, wherein the stylus becomes operable when no force is applied to the tip-switch. 13. The method of claim 11, wherein the housing has an outer surface, at least a portion of which is retro-reflective. 14. The method of claim 11, further comprising an accelerometer, the accelerometer to refine position determinations of the stylus in at least two dimensions. 15. The method of claim 11, wherein the control circuitry is operable to detect a threshold level of the magnitude of the force applied to the tip-switch.
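The claims describe producing a signal whose value varies with the tip force, over a range of roughly zero to 400 grams (claim 10). One plausible way such circuitry could quantize the measured force into a fixed-width code word before radio transmission is sketched below; the 8-bit width and the helper name are assumptions for illustration, not details from the application:

```python
def encode_force(force_g, max_force_g=400.0, bits=8):
    # Clamp the measured tip force to the sensing range, then quantize
    # it linearly to a fixed-width code word for radio transmission.
    clamped = max(0.0, min(force_g, max_force_g))
    levels = (1 << bits) - 1
    return round(clamped / max_force_g * levels)
```

The computing device on the receiving end would map the code word back to a force magnitude when rendering the stylus path.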
2,600
10,232
10,232
15,476,043
2,696
An approach is provided that receives, at a smartglasses device, a set of image data from a digital camera that is external to the smartglasses device. The approach further displays an image at the smartglasses based on the set of image data.
1. A method comprising: receiving, at a smartglasses device, a set of image data from a digital camera that is external to the smartglasses device; displaying an image at the smartglasses based on the set of image data; capturing, at the smartglasses device, a current view of a background; receiving the set of image data wherein the set of image data is one or more objects appearing in front of the background; generating the image by highlighting an area where the one or more objects appear; and displaying the image at the smartglasses where the background appears with the highlighting and without the one or more objects appearing on the image. 2. (canceled) 3. The method of claim 1 further comprising: receiving the set of image data wherein the set of image data is one or more objects; and combining a second set of image data captured at a second digital camera included in the smartglasses with the set of image data, wherein the combining creates the image that is displayed at the smartglasses. 4. The method of claim 1 further comprising: receiving the set of image data wherein the set of image data is a digital image; and displaying the digital image in a window appearing at a transparent display included in the smartglasses. 5. The method of claim 4 wherein the window appears as a picture-in-picture occupying a portion of the transparent display, and wherein a view through the transparent display occupies a remainder of the transparent display. 6. The method of claim 5 wherein the smartglasses are oriented in a different direction than the digital camera. 7. The method of claim 4 wherein the window occupies substantially an entire display area of the transparent display. 8. 
A smartglasses device comprising: one or more processors; a memory accessible by at least one of the processors; a transparent display capable of displaying images, wherein the display is accessible by at least one of the processors; a receiver, accessible by at least one of the processors, capable of receiving data from an external device; and a set of instructions stored in the memory and executable by at least one of the processors to: receive, at the receiver, a set of image data from a camera that is external to the smartglasses device; display, at the transparent display, an image that is based on the set of image data; capture, at the smartglasses device, a current view of a background; receive the set of image data wherein the set of image data is one or more objects appearing in front of the background; generate the image by highlighting an area where the one or more objects appear; and display the image at the smartglasses where the background appears with the highlighting and without the one or more objects appearing on the image. 9. (canceled) 10. The smartglasses device of claim 8 further comprising instructions stored in the memory and executable by at least one of the processors to: receive the set of image data wherein the set of image data is one or more objects; and combine a second set of image data captured at a second digital camera included in the smartglasses with the set of image data, wherein the combining creates the image that is displayed at the smartglasses. 11. The smartglasses device of claim 8 further comprising instructions stored in the memory and executable by at least one of the processors to: receive the set of image data wherein the set of image data is a digital image; and display the digital image in a window appearing at a transparent display included in the smartglasses. 12. 
The smartglasses device of claim 11 wherein the window appears as a picture-in-picture occupying a portion of the transparent display, and wherein a view through the transparent display occupies a remainder of the transparent display. 13. The smartglasses device of claim 12 wherein the smartglasses are oriented in a different direction than the digital camera. 14. The smartglasses device of claim 11 wherein the window occupies substantially an entire display area of the transparent display. 15. A computer program product comprising: a computer readable storage medium comprising a set of computer instructions, the computer instructions effective to: receive a set of image data from a camera that is external to the smartglasses device; display, at a transparent display, an image that is based on the set of image data; capture a current view of a background; receive the set of image data wherein the set of image data is one or more objects appearing in front of the background; generate the image by highlighting an area where the one or more objects appear; and display the image at the smartglasses where the background appears with the highlighting and without the one or more objects appearing on the image. 16. (canceled) 17. The computer program product of claim 15 wherein the actions further comprise: receive the set of image data wherein the set of image data is one or more objects; and combine a second set of image data captured at a second digital camera included in the smartglasses with the set of image data, wherein the combining creates the image that is displayed at the smartglasses. 18. The computer program product of claim 15 wherein the actions further comprise: receive the set of image data wherein the set of image data is a digital image; and display the digital image in a window appearing at a transparent display included in the smartglasses. 19. 
The computer program product of claim 18 wherein the window appears as a picture-in-picture occupying a portion of the transparent display, and wherein a view through the transparent display occupies a remainder of the transparent display. 20. The computer program product of claim 19 wherein the smartglasses are oriented in a different direction than the digital camera.
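Claim 5 describes the externally captured frame appearing as a picture-in-picture window occupying a portion of the transparent display, with the see-through view occupying the remainder. A minimal sketch of that compositing over a frame buffer; the array layout, origin parameter, and function name are illustrative assumptions:

```python
import numpy as np

def picture_in_picture(view, external, origin=(0, 0)):
    # Overlay the externally captured frame as a window occupying a
    # portion of the display buffer; the rest of the buffer keeps the
    # view through the transparent display.
    rows, cols = external.shape[:2]
    r, c = origin
    out = view.copy()
    out[r:r + rows, c:c + cols] = external
    return out
```

Passing an `external` frame the same size as `view` would correspond to claim 7, where the window occupies substantially the entire display area.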
2,600
10,233
10,233
15,332,508
2,612
A method and system for managing graphics load balancing strategies are disclosed. The method comprises using a plurality of rendering servers to render a multitude of graphics frames for a display device, wherein each of the rendering servers has an associated workload; identifying a plurality of load balancing strategies for balancing the workloads on the rendering servers; selecting one of the load balancing strategies; and using the selected one of the load balancing strategies to balance the workloads on the rendering servers. One or more defined metrics are monitored; and in response to a defined change in said one or more defined metrics, another one of the load balancing strategies is selected and used to balance the workloads on the rendering servers. In one embodiment, the load balancing policy can be changed in real-time during the course of an application session.
1. A method of managing graphics load balancing strategies, comprising: using a plurality of rendering servers to render concurrently a multitude of graphics frames for a display area on a display device, wherein each of the rendering servers has an associated workload; identifying a plurality of load balancing strategies for balancing the workloads on the rendering servers, each of the load balancing strategies being a respective one technique for partitioning the display area into a plurality of smaller regions and assigning said plurality of smaller regions of the display area to the rendering servers; and dynamically switching among the plurality of the load balancing strategies over a period of time to re-balance the workloads on the rendering servers, including using a system controller to implement the load balancing strategies, and using a manager to select different ones of the load balancing strategies for implementation at different times in said period of time and for communicating the selected load balancing strategies to the system controller, including the manager performing an execution loop to manage selection of the load balancing strategies, said execution loop including: analyzing performance statistics and additional information provided by the system controller to determine when a different one of the load balancing strategies is needed, changing from one of the load balancing strategies to another one of the load balancing strategies when a different one of the load balancing strategies is needed, communicating said different one of the load balancing strategies to the system controller, and obtaining new performance statistics from the system controller when said different one of the load balancing strategies is executed. 2. 
The method according to claim 1, the execution loop further including the manager repeating the analyzing performance statistics and additional information, changing from one of the load balancing strategies to another one of the load balancing strategies, communicating said different one of the load balancing strategies to the system controller, and obtaining new performance statistics from the system controller for a specified time. 3. The method according to claim 2, wherein the specified time is until a current application ceases. 4. The method according to claim 1, wherein the manager obtains the performance statistics for each server from the system controller, and provides initial display tile partitions and assignments for each of the rendering servers, and repartitions, resizes and reassigns the partitions based on the performance feedback and the one of the load balancing strategies in use. 5. The method according to claim 1, wherein the system controller accepts new load balancing strategies from a user, and passes information to the manager, said information including display configuration performance statistics for each server and user-defined load balancing policy information. 6. 
The method according to claim 1, wherein said performance statistics includes one or more user defined metrics. 10. The method according to claim 1, wherein the identifying includes a user providing one or more of the load balancing strategies. 11. A data processing system for managing graphics load balancing strategies, comprising: a plurality of rendering servers to render a multitude of graphics frames for a display area on a display device, wherein each of the rendering servers has an associated workload; a system controller for identifying a plurality of load balancing strategies for balancing the workloads on the rendering servers, for using one of the load balancing strategies to balance the workloads on the rendering servers, and for monitoring one or more defined metrics, each of the load balancing strategies being a respective one technique for partitioning the display area into a plurality of smaller regions and assigning said plurality of smaller regions of the display area to the rendering servers; and a load balancing policies manager for dynamically switching among the plurality of the load balancing strategies over a period of time to re-balance the workloads on the rendering servers including acting, in response to a defined change in said one or more defined metrics, to select another one of the load balancing strategies, including the manager performing an execution loop to manage selection of the load balancing strategies, said execution loop including analyzing performance statistics and additional information provided by the system controller to determine when a different one of the load balancing strategies is needed, changing from one of the load balancing strategies to another one of the load balancing strategies when a different one of the load balancing strategies is needed, communicating said different one of the load balancing strategies to the system controller, and obtaining new performance statistics from the system 
controller when said different one of the load balancing strategies is executed. 12. The system according to claim 11, wherein the execution loop further includes repeating the analyzing performance statistics and additional information, changing from one of the load balancing strategies to another one of the load balancing strategies, communicating said different one of the load balancing strategies to the system controller, and obtaining new performance statistics from the system controller for a specified time. 13. The system according to claim 12, wherein the specified time is until a current application ceases. 14. The system according to claim 11, wherein the manager obtains the performance statistics for each server from the system controller, and provides initial display tile partitions and assignments for each of the rendering servers, and repartitions, resizes and reassigns the partitions based on the performance feedback and the one of the load balancing strategies in use. 15. The system according to claim 11, wherein the system controller accepts new load balancing strategies from a user, and passes information to the manager, said information including display configuration performance statistics for each server and user-defined load balancing policy information. 16. 
An article of manufacture comprising at least one tangible computer usable hardware device having computer readable program code logic tangibly embodied therein to execute a machine instruction in one or more processing units for managing graphics load balancing strategies, said computer readable program code logic, when executing, performing the following steps: using a plurality of rendering servers to render a multitude of graphics frames for a display area on a display device, wherein each of the rendering servers has an associated workload; identifying a plurality of load balancing strategies for balancing the workloads on the rendering servers, each of the load balancing strategies being a respective one technique for partitioning the display area into a plurality of smaller regions and assigning said plurality of smaller regions of the display area to the rendering servers; and dynamically switching among the plurality of the load balancing strategies over a period of time to re-balance the workloads on the rendering servers, including using a system controller to implement the load balancing strategies, and using a manager to select different ones of the load balancing strategies for implementation at different times in said period of time and for communicating the selected load balancing strategies to the system controller, including the manager performing an execution loop to manage selection of the load balancing strategies, said execution loop including: analyzing performance statistics and additional information provided by the system controller to determine when a different one of the load balancing strategies is needed, changing from one of the load balancing strategies to another one of the load balancing strategies when a different one of the load balancing strategies is needed, communicating said different one of the load balancing strategies to the system controller, and obtaining new performance statistics from the system controller when said 
different one of the load balancing strategies is executed. 17. The computer program product according to claim 16, wherein the execution loop further includes repeating the analyzing performance statistics and additional information, changing from one of the load balancing strategies to another one of the load balancing strategies, communicating said different one of the load balancing strategies to the system controller, and obtaining new performance statistics from the system controller for a specified time. 18. The computer program product according to claim 17, wherein the specified time is until a current application ceases. 19. The computer program product according to claim 16, wherein the manager obtains the performance statistics for each server from the system controller, and provides initial display tile partitions and assignments for each of the rendering servers, and repartitions, resizes and reassigns the partitions based on the performance feedback and the one of the load balancing strategies in use. 20. The computer program product according to claim 16, wherein: the using a plurality of rendering servers includes said plurality of rendering servers rendering the multitude of graphics frames at a defined rate; the changing from one of the load balancing strategies to another of the load balancing strategies includes changing from the one of the load balancing strategies to the another of the load balancing strategies without affecting said defined rate; and said defined rate is a constant rate.
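The execution loop recited in the independent claims (analyze statistics from the system controller, change strategy when a different one is needed, communicate it to the controller, obtain new statistics) can be sketched as follows. The class names, the 25% imbalance threshold, and the simulated controller are hypothetical stand-ins for illustration, not the patented implementation:

```python
class SystemController:
    """Hypothetical stand-in for the claimed system controller: it
    executes the current strategy and reports per-server statistics."""
    def __init__(self, loads):
        self.loads = loads          # simulated per-server frame times (ms)
        self.strategy = None
    def get_stats(self):
        return list(self.loads)
    def set_strategy(self, strategy):
        self.strategy = strategy
        # Re-partitioning under the new strategy evens out the simulated load.
        mean = sum(self.loads) / len(self.loads)
        self.loads = [mean] * len(self.loads)

class Manager:
    """One pass of the claimed execution loop: analyze statistics, change
    strategy when a different one is needed, communicate it to the
    controller, and obtain new performance statistics."""
    def __init__(self, controller, strategies):
        self.controller = controller
        self.strategies = list(strategies)
        self.current = self.strategies[0]
    def run_once(self):
        stats = self.controller.get_stats()
        imbalance = max(stats) - min(stats)
        if imbalance > 0.25 * max(stats):   # a different strategy is needed
            idx = self.strategies.index(self.current)
            self.current = self.strategies[(idx + 1) % len(self.strategies)]
            self.controller.set_strategy(self.current)  # communicate it
        return self.controller.get_stats()  # new performance statistics
```

In practice `run_once` would be repeated for a specified time (per dependent claims 2 and 3, until the current application ceases).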
2,600
10,234
10,234
15,640,339
2,616
Methods and systems for rendering augmented reality display data to locations of a physical presentation environment based on a presentation configuration are provided. A physical presentation environment configuration may be accessed that includes locations of a physical presentation environment for mapping augmented reality display data. The augmented reality display data may include a plurality of augmented reality objects that are rendered for display. Presentation attributes of the augmented reality display data may be used in conjunction with the presentation configuration for mapping and rendering the augmented reality display data. The rendered augmented reality display data may be dynamically interactive, and may be generated based on previous presentation configurations, mapping preferences, mapping limitations, and/or other factors.
1. A system for rendering augmented reality display data based on physical presentation environments, the system comprising: a presentation environment configuration component configured to: access a physical presentation environment configuration, wherein the physical presentation environment configuration comprises locations of a physical presentation environment for mapping augmented reality display data to the physical presentation environment; an augmented reality display data attribute component configured to: determine presentation attributes of the augmented reality display data, wherein the presentation attributes comprise features of the augmented reality display data, and wherein the augmented reality display data comprises a plurality of augmented reality objects to be rendered for display; a data mapping component configured to: generate a presentation configuration for the augmented reality display data based on the physical presentation environment configuration and the presentation attributes, wherein the presentation configuration comprises a mapping of the augmented reality display data to the physical presentation environment; and a data rendering component configured to: render the augmented reality display data based on the presentation configuration. 2. The system of claim 1, further comprising a physical environment scanning component configured to: generate a scanned image of the physical presentation environment, and automatically perform a location recognition operation on the scanned image to determine the locations of the physical presentation environment for mapping the augmented reality display data. 3. 
The system of claim 2, wherein the location recognition operation determines the locations of the physical presentation environment based on one or more location mapping factors, the one or more location mapping factors comprising at least one of: a location of one or more planar surfaces in the physical presentation environment for mapping one or more of the plurality of augmented reality objects; a location of one or more objects in the physical presentation environment for mapping one or more of the plurality of augmented reality objects; a location of one or more avoidance areas in the physical presentation environment for restricting the mapping of one or more of the plurality of augmented reality objects; and a location of a user in the physical presentation environment. 4. The system of claim 1, wherein the augmented reality display data attribute component is further configured to determine the features of the presentation attributes based on at least one of: a retrieved data property file identifying the features; and a retrieved layout property of the features. 5. The system of claim 1, wherein the data mapping component is further configured to generate the presentation configuration based on at least one of: a previously generated presentation configuration for the physical presentation environment; and a previously generated presentation configuration for a different physical presentation environment. 6. The system of claim 1, wherein the data mapping component is further configured to generate a modified presentation configuration for the augmented reality display data based on a received input indicating at least one of: a mapping preference for the augmented reality display data; and a mapping limitation for the augmented reality display data. 7. 
The system of claim 1, further comprising a virtual control screen generation component configured to generate a virtual control screen for mapping to the physical presentation environment with the augmented reality display data, wherein the virtual control screen is configured to modify the presentation configuration and the rendering of the augmented reality display data in response to dynamic interaction with the virtual control screen. 8. The system of claim 1, wherein rendering the augmented reality display data based on the presentation configuration comprises rendering each of the plurality of augmented reality objects to a respective one of the locations of the physical presentation environment. 9. The system of claim 1, wherein rendering the augmented reality display data comprises rendering at least one of the plurality of augmented reality objects to an object in the physical presentation environment, and wherein the at least one augmented reality object rendered to the object in the physical presentation environment is modifiable based on dynamic interaction with the object. 10. 
A computer-implemented method for rendering augmented reality display data based on physical presentation environments, the method comprising: accessing a physical presentation environment configuration, wherein the physical presentation environment configuration comprises locations of a physical presentation environment for mapping augmented reality display data to the physical presentation environment; determining presentation attributes of the augmented reality display data, wherein the presentation attributes comprise features of the augmented reality display data, and wherein the augmented reality display data comprises a plurality of augmented reality objects to be rendered for display; generating a presentation configuration for the augmented reality display data based on the physical presentation environment configuration and the presentation attributes, wherein the presentation configuration comprises a mapping of the augmented reality display data to the physical presentation environment; and rendering the augmented reality display data based on the presentation configuration. 11. The computer-implemented method of claim 10, wherein the presentation configuration is generated based on a previously generated presentation configuration for a different physical presentation environment in order to maintain at least one mapping characteristic of the previously generated presentation configuration, and wherein the previously generated presentation configuration is generated based on the different physical presentation environment configuration and the presentation attributes of the augmented reality display data. 12. 
The computer-implemented method of claim 10, wherein the presentation configuration is generated based on a previously generated presentation configuration for the physical presentation environment, wherein the presentation configuration does at least one of: maintains a mapping characteristic of the previously generated presentation configuration, maintains a received mapping preference for the previously generated presentation configuration, maintains a received mapping limitation for the previously generated presentation configuration, and modifies a mapping characteristic of the previously generated presentation configuration. 13. The computer-implemented method of claim 10, wherein accessing a physical presentation environment configuration comprises: receiving a scanned image of the physical presentation environment that includes the locations; performing a location recognition operation on the scanned image to determine the locations based on one or more location mapping factors, the one or more location mapping factors comprising at least one of: a location of one or more planar surfaces in the physical presentation environment for mapping one or more of the plurality of augmented reality objects; a location of one or more objects in the physical presentation environment for mapping one or more of the plurality of augmented reality objects; a location of one or more avoidance areas in the physical presentation environment for restricting the mapping of one or more of the plurality of augmented reality objects; and a location of a user in the physical presentation environment. 14. 
The computer-implemented method of claim 10, further comprising: receiving an input for the mapping of the augmented reality display data, the input comprising at least one of: a mapping preference, and a mapping limitation; generating a modified presentation configuration for the augmented reality display data based on the received input; and rendering the augmented reality display data based on the modified presentation configuration. 15. One or more computer storage media having computer-executable instructions embodied thereon that, when executed by one or more processors, cause the one or more processors to perform a method for rendering augmented reality display data based on physical presentation environments, the method comprising: accessing a first presentation configuration corresponding to a first physical presentation environment configuration, wherein a physical presentation environment configuration comprises locations of a physical presentation environment for mapping augmented reality display data to a physical presentation environment; accessing a second physical presentation environment configuration, wherein the second physical presentation environment configuration comprises locations of a second physical presentation environment; determining presentation attributes of the augmented reality display data, wherein the presentation attributes comprise features of the augmented reality display data, and wherein the augmented reality display data comprises a plurality of augmented reality objects to be rendered for display; generating a second presentation configuration for the augmented reality display data based on the second physical presentation environment configuration and the presentation attributes, wherein the second presentation configuration comprises a mapping of the augmented reality display data to the second physical presentation environment based at least in part on the first presentation configuration; and rendering the augmented 
reality display data based on the second presentation configuration. 16. The one or more computer storage media of claim 15, wherein the rendered augmented reality display data includes a dynamically interactive augmented reality object, and wherein the rendered augmented reality display data is modifiable based on dynamic interaction with at least one of: the dynamically interactive augmented reality object; and an object in the physical presentation environment associated with the dynamically interactive augmented reality object. 17. The one or more computer storage media of claim 16, wherein the method further comprises: receiving a dynamic input that modifies the dynamically interactive augmented reality object, wherein the dynamic input initiates at least one of: a change in location of the dynamically interactive augmented reality object, a change in orientation of the dynamically interactive augmented reality object, a modification of content presented with the dynamically interactive augmented reality object, a reorganization of content presented with the dynamically interactive augmented reality object, and a modification of at least some of the rendered augmented reality display data; and modifying the rendered augmented reality display data based on the received dynamic input. 18. The one or more computer storage media of claim 16, wherein the dynamically interactive augmented reality object is rendered to remain at a fixed location and orientation relative to a user. 19. 
The one or more computer storage media of claim 16, wherein the dynamic interaction comprises at least one of: a detected user eye movement guiding the dynamic interaction with the dynamically interactive augmented reality object; a detected user body movement guiding the dynamic interaction with the dynamically interactive augmented reality object; a detected input from an additional input device; an assignment of the dynamically interactive augmented reality object to a location in the physical presentation environment; and an assignment of one or more of the plurality of augmented reality objects to one or more of the locations in the physical presentation environment configuration based on at least one of: dynamic placement of the plurality of augmented reality objects on the dynamically interactive augmented reality object, and dynamic placement of the plurality of augmented reality objects on the physical presentation environment. 20. The one or more computer storage media of claim 16, wherein the dynamic interaction provides a haptic feedback.
Methods and systems for rendering augmented reality display data to locations of a physical presentation environment based on a presentation configuration are provided. A physical presentation environment configuration may be accessed that includes locations of a physical presentation environment for mapping augmented reality display data. The augmented reality display data may include a plurality of augmented reality objects that are rendered for display. Presentation attributes of the augmented reality display data may be used in conjunction with the presentation configuration for mapping and rendering the augmented reality display data. The rendered augmented reality display data may be dynamically interactive, and may be generated based on previous presentation configurations, mapping preferences, mapping limitations, and/or other factors.1. A system for rendering augmented reality display data based on physical presentation environments, the system comprising: a presentation environment configuration component configured to: access a physical presentation environment configuration, wherein the physical presentation environment configuration comprises locations of a physical presentation environment for mapping augmented reality display data to the physical presentation environment; an augmented reality display data attribute component configured to: determine presentation attributes of the augmented reality display data, wherein the presentation attributes comprise features of the augmented reality display data, and wherein the augmented reality display data comprises a plurality of augmented reality objects to be rendered for display; a data mapping component configured to: generate a presentation configuration for the augmented reality display data based on the physical presentation environment configuration and the presentation attributes, wherein the presentation configuration comprises a mapping of the augmented reality display data to the physical 
presentation environment; and a data rendering component configured to: render the augmented reality display data based on the presentation configuration. 2. The system of claim 1, further comprising a physical environment scanning component configured to: generate a scanned image of the physical presentation environment, and automatically perform a location recognition operation on the scanned image to determine the locations of the physical presentation environment for mapping the augmented reality display data. 3. The system of claim 2, wherein the location recognition operation determines the locations of the physical presentation environment based on one or more location mapping factors, the one or more location mapping factors comprising at least one of: a location of one or more planar surfaces in the physical presentation environment for mapping one or more of the plurality of augmented reality objects; a location of one or more objects in the physical presentation environment for mapping one or more of the plurality of augmented reality objects; a location of one or more avoidance areas in the physical presentation environment for restricting the mapping of one or more of the plurality of augmented reality objects; and a location of a user in the physical presentation environment. 4. The system of claim 1, wherein the augmented reality display data attribute component is further configured to determine the features of the presentation attributes based on at least one of: a retrieved data property file identifying the features; and a retrieved layout property of the features. 5. The system of claim 1, wherein the data mapping component is further configured to generate the presentation configuration based on at least one of: a previously generated presentation configuration for the physical presentation environment; and a previously generated presentation configuration for a different physical presentation environment. 6. 
The system of claim 1, wherein the data mapping component is further configured to generate a modified presentation configuration for the augmented reality display data based on a received input indicating at least one of: a mapping preference for the augmented reality display data; and a mapping limitation for the augmented reality display data. 7. The system of claim 1, further comprising a virtual control screen generation component configured to generate a virtual control screen for mapping to the physical presentation environment with the augmented reality display data, wherein the virtual control screen is configured to modify the presentation configuration and the rendering of the augmented reality display data in response to dynamic interaction with the virtual control screen. 8. The system of claim 1, wherein rendering the augmented reality display data based on the presentation configuration comprises rendering each of the plurality of augmented reality objects to a respective one of the locations of the physical presentation environment. 9. The system of claim 1, wherein rendering the augmented reality display data comprises rendering at least one of the plurality of augmented reality objects to an object in the physical presentation environment, and wherein the at least one augmented reality object rendered to the object in the physical presentation environment is modifiable based on dynamic interaction with the object. 10. 
A computer-implemented method for rendering augmented reality display data based on physical presentation environments, the method comprising: accessing a physical presentation environment configuration, wherein the physical presentation environment configuration comprises locations of a physical presentation environment for mapping augmented reality display data to the physical presentation environment; determining presentation attributes of the augmented reality display data, wherein the presentation attributes comprise features of the augmented reality display data, and wherein the augmented reality display data comprises a plurality of augmented reality objects to be rendered for display; generating a presentation configuration for the augmented reality display data based on the physical presentation environment configuration and the presentation attributes, wherein the presentation configuration comprises a mapping of the augmented reality display data to the physical presentation environment; and rendering the augmented reality display data based on the presentation configuration. 11. The computer-implemented method of claim 10, wherein the presentation configuration is generated based on a previously generated presentation configuration for a different physical presentation environment in order to maintain at least one mapping characteristic of the previously generated presentation configuration, and wherein the previously generated presentation configuration is generated based on the different physical presentation environment configuration and the presentation attributes of the augmented reality display data. 12. 
The computer-implemented method of claim 10, wherein the presentation configuration is generated based on a previously generated presentation configuration for the physical presentation environment, wherein the presentation configuration at least one of: maintains a mapping characteristic of the previously generated presentation configuration, maintains a received mapping preference for the previously generated presentation configuration, maintains a received mapping limitation for the previously generated presentation configuration, and modifies a mapping characteristic of the previously generated presentation configuration. 13. The computer-implemented method of claim 10, wherein accessing a physical presentation environment configuration comprises: receiving a scanned image of the physical presentation environment that includes the locations; performing a location recognition operation on the scanned image to determine the locations based on one or more location mapping factors, the one or more location mapping factors comprising at least one of: a location of one or more planar surfaces in the physical presentation environment for mapping one or more of the plurality of augmented reality objects; a location of one or more objects in the physical presentation environment for mapping one or more of the plurality of augmented reality objects; a location of one or more avoidance areas in the physical presentation environment for restricting the mapping of one or more of the plurality of augmented reality objects; and a location of a user in the physical presentation environment. 14. 
The computer-implemented method of claim 10, further comprising: receiving an input for the mapping of the augmented reality display data, the input comprising at least one of: a mapping preference, and a mapping limitation; generating a modified presentation configuration for the augmented reality display data based on the received input; and rendering the augmented reality display data based on the modified presentation configuration. 15. One or more computer storage media having computer-executable instructions embodied thereon that, when executed by one or more processors, cause the one or more processors to perform a method for rendering augmented reality display data based on physical presentation environments, the method comprising: accessing a first presentation configuration corresponding to a first physical presentation environment configuration, wherein a physical presentation environment configuration comprises locations of a physical presentation environment for mapping augmented reality display data to a physical presentation environment; accessing a second physical presentation environment configuration, wherein the second physical presentation environment configuration comprises locations of a second physical presentation environment; determining presentation attributes of the augmented reality display data, wherein the presentation attributes comprise features of the augmented reality display data, and wherein the augmented reality display data comprises a plurality of augmented reality objects to be rendered for display; generating a second presentation configuration for the augmented reality display data based on the second physical presentation environment configuration and the presentation attributes, wherein the second presentation configuration comprises a mapping of the augmented reality display data to the second physical presentation environment based at least in part on the first presentation configuration; and rendering the augmented 
reality display data based on the second presentation configuration. 16. The one or more computer storage media of claim 15, wherein the rendered augmented reality display data includes a dynamically interactive augmented reality object, and wherein the rendered augmented reality display data is modifiable based on dynamic interaction with at least one of: the dynamically interactive augmented reality object; and an object in the physical presentation environment associated with the dynamically interactive augmented reality object. 17. The one or more computer storage media of claim 16, wherein the method further comprises: receiving a dynamic input that modifies the dynamically interactive augmented reality object, wherein the dynamic input initiates at least one of: a change in location of the dynamically interactive augmented reality object, a change in orientation of the dynamically interactive augmented reality object, a modification of content presented with the dynamically interactive augmented reality object, a reorganization of content presented with the dynamically interactive augmented reality object, and a modification of at least some of the rendered augmented reality display data; and modifying the rendered augmented reality display data based on the received dynamic input. 18. The one or more computer storage media of claim 16, wherein the dynamically interactive augmented reality object is rendered to remain at a fixed location and orientation relative to a user. 19. 
The one or more computer storage media of claim 16, wherein the dynamic interaction comprises at least one of: a detected user eye movement guiding the dynamic interaction with the dynamically interactive augmented reality object; a detected user body movement guiding the dynamic interaction with the dynamically interactive augmented reality object; a detected input from an additional input device; an assignment of the dynamically interactive augmented reality object to a location in the physical presentation environment; and an assignment of one or more of the plurality of augmented reality objects to one or more of the locations in the physical presentation environment configuration based on at least one of: dynamic placement of the plurality of augmented reality objects on the dynamically interactive augmented reality object, and dynamic placement of the plurality of augmented reality objects on the physical presentation environment. 20. The one or more computer storage media of claim 16, wherein the dynamic interaction provides a haptic feedback.
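The mapping step claimed above (locations of a physical presentation environment, presentation attributes of the objects, avoidance areas) can be sketched in a few lines. This is an illustrative assumption, not code from the patent: all names (`Location`, `ArObject`, `generate_presentation_config`) and the planar-surface preference rule are hypothetical.

```python
# Hypothetical sketch of generating a "presentation configuration": assign
# each augmented reality object its own location (one object per location),
# never use avoidance areas, and prefer planar surfaces for objects whose
# presentation attributes call for a surface.
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    planar: bool = False      # planar surfaces are preferred anchor points
    avoidance: bool = False   # restricted areas are never mapped to

@dataclass
class ArObject:
    name: str
    needs_surface: bool = False  # a presentation attribute of the object

def generate_presentation_config(objects, locations):
    """Return a mapping {object name -> location name}."""
    usable = [loc for loc in locations if not loc.avoidance]
    usable.sort(key=lambda loc: not loc.planar)          # planar first
    ordered = sorted(objects, key=lambda o: not o.needs_surface)
    # Surface-needing objects are paired with planar locations first.
    return {o.name: loc.name for o, loc in zip(ordered, usable)}
```

A second presentation configuration for a new room, as in claim 15, could then be generated by re-running the same function over the new environment's locations while reusing the previous object ordering.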
2,600
10,235
10,235
14,053,237
2,641
Providing vehicle services and control over a cellular data network is disclosed. Vehicle maintenance and operation information may be provided to a vehicle. An instruction to remotely control locks or a horn of the vehicle based on security levels and other criteria is also disclosed.
1. A management system comprising: circuitry configured to send, via a cellular data network using internet protocol (IP) to a vehicle, vehicle maintenance and operation information; the circuitry is further configured to send, via the cellular data network using IP, an instruction to remotely control locks or a horn of the vehicle; and wherein the instruction is provided based on a plurality of security levels and based on a request for permission of control of the vehicle. 2. The management system of claim 1, wherein the management system is capable of contacting law enforcement. 3. The management system of claim 1 further comprising: circuitry configured to receive, from the vehicle, a notification of an unauthorized operator accessing the vehicle; and wherein the accessing of the vehicle indicates that the vehicle was stolen. 4. The management system of claim 1 wherein a connection to the internet over the cellular data network is provided to the vehicle. 5. A vehicle computer configured in a vehicle, the vehicle computer comprising: the vehicle computer configured to receive, via a cellular data network using internet protocol (IP), vehicle maintenance and operation information; the vehicle computer is further configured to receive an instruction, via the cellular data network using IP, to remotely control locks or a horn of the vehicle; and wherein the instruction is provided based on a plurality of security levels and based on a request for permission of control of the vehicle. 6. The vehicle computer of claim 5, wherein the vehicle computer is capable of contacting law enforcement. 7. The vehicle computer of claim 5, wherein unauthorized access of the vehicle indicates that the vehicle was stolen. 8. The vehicle computer of claim 5, further comprising: the vehicle computer is configured to connect to the internet over the cellular data network. 9. 
A method performed by a management system, the method comprising: sending, by the management system to a vehicle via a cellular data network using internet protocol (IP), vehicle maintenance and operation information; sending, by the management system via the cellular data network using IP, an instruction to remotely control locks or a horn of the vehicle; and wherein the instruction is provided based on a plurality of security levels and based on a request for permission of control of the vehicle. 10. The method of claim 9, wherein the management system is capable of contacting law enforcement. 11. The method of claim 9 further comprising: receiving, by the management system from the vehicle, a notification of an unauthorized operator accessing the vehicle; and wherein the accessing of the vehicle indicates that the vehicle was stolen. 12. The method of claim 9 wherein a connection to the internet over the cellular data network is provided to the vehicle. 13. A method performed by a vehicle computer configured in a vehicle, the method comprising: receiving, by the vehicle computer via a cellular data network using internet protocol (IP), vehicle maintenance and operation information; receiving, by the vehicle computer via the cellular data network using IP, an instruction to remotely control locks or a horn of the vehicle; and wherein the instruction is provided based on a plurality of security levels and based on a request for permission of control of the vehicle. 14. The method of claim 13, wherein the vehicle computer is capable of contacting law enforcement. 15. The method of claim 13, wherein unauthorized access of the vehicle indicates that the vehicle was stolen. 16. The method of claim 13, further comprising: connecting, by the vehicle computer, to the internet over the cellular data network.
Providing vehicle services and control over a cellular data network is disclosed. Vehicle maintenance and operation information may be provided to a vehicle. An instruction to remotely control locks or a horn of the vehicle based on security levels and other criteria is also disclosed.1. A management system comprising: circuitry configured to send, via a cellular data network using internet protocol (IP) to a vehicle, vehicle maintenance and operation information; the circuitry is further configured to send, via the cellular data network using IP, an instruction to remotely control locks or a horn of the vehicle; and wherein the instruction is provided based on a plurality of security levels and based on a request for permission of control of the vehicle. 2. The management system of claim 1, wherein the management system is capable of contacting law enforcement. 3. The management system of claim 1 further comprising: circuitry configured to receive, from the vehicle, a notification of an unauthorized operator accessing the vehicle; and wherein the accessing of the vehicle indicates that the vehicle was stolen. 4. The management system of claim 1 wherein a connection to the internet over the cellular data network is provided to the vehicle. 5. A vehicle computer configured in a vehicle, the vehicle computer comprising: the vehicle computer configured to receive, via a cellular data network using internet protocol (IP), vehicle maintenance and operation information; the vehicle computer is further configured to receive an instruction, via the cellular data network using IP, to remotely control locks or a horn of the vehicle; and wherein the instruction is provided based on a plurality of security levels and based on a request for permission of control of the vehicle. 6. The vehicle computer of claim 5, wherein the vehicle computer is capable of contacting law enforcement. 7. 
The vehicle computer of claim 5, wherein unauthorized access of the vehicle indicates that the vehicle was stolen. 8. The vehicle computer of claim 5, further comprising: the vehicle computer is configured to connect to the internet over the cellular data network. 9. A method performed by a management system, the method comprising: sending, by the management system to a vehicle via a cellular data network using internet protocol (IP), vehicle maintenance and operation information; sending, by the management system via the cellular data network using IP, an instruction to remotely control locks or a horn of the vehicle; and wherein the instruction is provided based on a plurality of security levels and based on a request for permission of control of the vehicle. 10. The method of claim 9, wherein the management system is capable of contacting law enforcement. 11. The method of claim 9 further comprising: receiving, by the management system from the vehicle, a notification of an unauthorized operator accessing the vehicle; and wherein the accessing of the vehicle indicates that the vehicle was stolen. 12. The method of claim 9 wherein a connection to the internet over the cellular data network is provided to the vehicle. 13. A method performed by a vehicle computer configured in a vehicle, the method comprising: receiving, by the vehicle computer via a cellular data network using internet protocol (IP), vehicle maintenance and operation information; receiving, by the vehicle computer via the cellular data network using IP, an instruction to remotely control locks or a horn of the vehicle; and wherein the instruction is provided based on a plurality of security levels and based on a request for permission of control of the vehicle. 14. The method of claim 13, wherein the vehicle computer is capable of contacting law enforcement. 15. The method of claim 13, wherein unauthorized access of the vehicle indicates that the vehicle was stolen. 16. 
The method of claim 13, further comprising: connecting, by the vehicle computer, to the internet over the cellular data network.
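The gating described in these claims, where an instruction to control the locks or horn is only sent when a request for permission of control clears a security level, can be sketched as follows. All names (`SECURITY_LEVELS`, `send_instruction`, the role and command strings) are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch: the management system sends a remote-control instruction
# over the cellular data network (modeled here as a callable transport)
# only when the requester's security level grants permission.
SECURITY_LEVELS = {"owner": 3, "family": 2, "guest": 1}   # assumed levels
REQUIRED_LEVEL = {"unlock_doors": 2, "lock_doors": 1, "sound_horn": 2}

def authorize_instruction(requester_role, instruction):
    """Return True if the request for permission of control is granted."""
    level = SECURITY_LEVELS.get(requester_role, 0)
    return level >= REQUIRED_LEVEL.get(instruction, 99)

def send_instruction(requester_role, instruction, transport):
    """Send the instruction to the vehicle only when authorized."""
    if not authorize_instruction(requester_role, instruction):
        return False
    transport({"cmd": instruction})   # e.g. an IP message to the vehicle
    return True
```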
2,600
10,236
10,236
15,265,007
2,699
An exemplary method of controlling a vehicle from a remote location includes receiving a first signal from a vehicle-paired device and initiating a vehicle function in response to the first signal. The first signal is sent by the vehicle-paired device in response to a second signal sent from a secondary triggering device. An exemplary range extending system includes a vehicle-paired device configured to transmit a first signal to a vehicle, and a secondary triggering device that transmits a second signal to the vehicle-paired device to initiate a transmission of the first signal from the vehicle-paired device.
1. A method of controlling a vehicle from a remote location, comprising: receiving, at a vehicle, a first signal from a vehicle-paired device within a wireless communication range of the vehicle and initiating a vehicle function in response to the first signal, the first signal sent by the vehicle-paired device in response to a second signal sent from a secondary triggering device that is outside the wireless communication range. 2. (canceled) 3. The method of claim 1, wherein the vehicle-paired device is in a first location remote from the vehicle and the secondary triggering device is in a second location remote from the vehicle. 4. The method of claim 3, wherein the first and second locations are within an interior of a building, wherein the second location is further from the vehicle than the first location. 5. The method of claim 1, wherein the vehicle can receive the first signal and then initiate the vehicle function when the vehicle-paired device is within the communication range, and the vehicle cannot receive the first signal and then initiate the vehicle function when the vehicle-paired device is outside the communication range. 6. (canceled) 7. The method of claim 1, wherein the second signal is communicated from the secondary triggering device to the vehicle-paired device via a local area network. 8. The method of claim 1, wherein initiating the vehicle function comprises stopping a charging procedure. 9. The method of claim 1, wherein the secondary triggering device is part of a home control system, and the vehicle-paired device is a non-keyfob device mounted in a fixed position within a home having the home control system. 10. 
A range extending system, comprising: a vehicle-paired device configured to transmit a first signal to a vehicle when within a communication range of the vehicle; and a secondary triggering device that transmits a second signal from outside the communication range to the vehicle-paired device to initiate a transmission of the first signal from the vehicle-paired device. 11. (canceled) 12. The range extending system of claim 10, wherein the vehicle-paired device is a non-key fob device. 13. The range extending system of claim 10, wherein the vehicle-paired device is mounted in a fixed position outside the vehicle. 14. The range extending system of claim 10, wherein the vehicle-paired device is in a first position within the communication range of the vehicle, and the secondary triggering device is in a second position that is outside the communication range, wherein the first and second locations are within an interior of a building. 15. The range extending system of claim 10, wherein the secondary triggering device is remote from the vehicle-paired device. 16. The range extending system of claim 10, further comprising the vehicle, wherein the vehicle is configured to initiate a vehicle function in response to the first signal. 17. The range extending system of claim 16, wherein the vehicle function comprises stopping a charge station charging a traction battery of the vehicle, wherein the vehicle-paired device and the charge station are powered by a common power circuit. 18. The range extending system of claim 10, wherein the secondary triggering device is part of a home control system. 19. The method of claim 1, wherein the secondary triggering device and the vehicle are unpaired. 20. The method of claim 8, wherein the vehicle-paired device is configured such that a user can interact directly with the vehicle-paired device to initiate a stopping of the charging procedure. 21. 
The method of claim 8, wherein a user interacts with the secondary triggering device to send the second signal, and further comprising, after stopping the charging procedure, the user uses a powered device, wherein the powered device and a charging station that charges the vehicle during the charging procedure both draw power from the same circuit. 22. The range extending system of claim 10, wherein the vehicle-paired device is configured such that the vehicle-paired device can transmit a third signal to the vehicle in response to a user interacting directly with the vehicle-paired device, the vehicle stopping a charge of a traction battery in response to the third signal. 23. The range extending system of claim 10, wherein the vehicle-paired device includes a first radio communication system to communicate with the vehicle and a separate, second radio communication system to communicate with the secondary triggering device.
An exemplary method of controlling a vehicle from a remote location includes receiving a first signal from a vehicle-paired device and initiating a vehicle function in response to the first signal. The first signal is sent by the vehicle-paired device in response to a second signal sent from a secondary triggering device. An exemplary range extending system includes a vehicle-paired device configured to transmit a first signal to a vehicle, and a secondary triggering device that transmits a second signal to the vehicle-paired device to initiate a transmission of the first signal from the vehicle-paired device.1. A method of controlling a vehicle from a remote location, comprising: receiving, at a vehicle, a first signal from a vehicle-paired device within a wireless communication range of the vehicle and initiating a vehicle function in response to the first signal, the first signal sent by the vehicle-paired device in response to a second signal sent from a secondary triggering device that is outside the wireless communication range. 2. (canceled) 3. The method of claim 1, wherein the vehicle-paired device is in a first location remote from the vehicle and the secondary triggering device is in a second location remote from the vehicle. 4. The method of claim 3, wherein the first and second locations are within an interior of a building, wherein the second location is further from the vehicle than the first location. 5. The method of claim 1, wherein the vehicle can receive the first signal and then initiate the vehicle function when the vehicle-paired device is within the communication range, and the vehicle cannot receive the first signal and then initiate the vehicle function when the vehicle-paired device is outside the communication range. 6. (canceled) 7. The method of claim 1, wherein the second signal is communicated from the secondary triggering device to the vehicle-paired device via a local area network. 8. 
The method of claim 1, wherein initiating the vehicle function comprises stopping a charging procedure. 9. The method of claim 1, wherein the secondary triggering device is part of a home control system, and the vehicle-paired device is a non-keyfob device mounted in a fixed position within a home having the home control system. 10. A range extending system, comprising: a vehicle-paired device configured to transmit a first signal to a vehicle when within a communication range of the vehicle; and a secondary triggering device that transmits a second signal from outside the communication range to the vehicle-paired device to initiate a transmission of the first signal from the vehicle-paired device. 11. (canceled) 12. The range extending system of claim 10, wherein the vehicle-paired device is a non-key fob device. 13. The range extending system of claim 10, wherein the vehicle-paired device is mounted in a fixed position outside the vehicle. 14. The range extending system of claim 10, wherein the vehicle-paired device is in a first position within the communication range of the vehicle, and the secondary triggering device is in a second position that is outside the communication range, wherein the first and second locations are within an interior of a building. 15. The range extending system of claim 10, wherein the secondary triggering device is remote from the vehicle-paired device. 16. The range extending system of claim 10, further comprising the vehicle, wherein the vehicle is configured to initiate a vehicle function in response to the first signal. 17. The range extending system of claim 16, wherein the vehicle function comprises stopping a charge station charging a traction battery of the vehicle, wherein the vehicle-paired device and the charge station are powered by a common power circuit. 18. The range extending system of claim 10, wherein the secondary triggering device is part of a home control system. 19. 
The method of claim 1, wherein the secondary triggering device and the vehicle are unpaired. 20. The method of claim 8, wherein the vehicle-paired device is configured such that a user can interact directly with the vehicle-paired device to initiate a stopping of the charging procedure. 21. The method of claim 8, wherein a user interacts with the secondary triggering device to send the second signal, and further comprising, after stopping the charging procedure, the user uses a powered device, wherein the powered device and a charging station that charges the vehicle during the charging procedure both draw power from the same circuit. 22. The range extending system of claim 10, wherein the vehicle-paired device is configured such that the vehicle-paired device can transmit a third signal to the vehicle in response to a user interacting directly with the vehicle-paired device, the vehicle stopping a charge of a traction battery in response to the third signal. 23. The range extending system of claim 10, wherein the vehicle-paired device includes a first radio communication system to communicate with the vehicle and a separate, second radio communication system to communicate with the secondary triggering device.
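The range-extending relay claimed above can be sketched as follows: a secondary triggering device outside the vehicle's wireless range sends a second signal (e.g. over a home LAN) to a vehicle-paired device inside the range, which then transmits the first signal to the vehicle. All class and method names here are illustrative assumptions, not from the patent.

```python
# Minimal sketch of the claimed relay. Distances and range are abstract
# units; the vehicle only acts on a first signal sent from within range.
class Vehicle:
    def __init__(self, comm_range):
        self.comm_range = comm_range
        self.charging = True

    def receive_first_signal(self, sender_distance, function):
        # Claim 5: the first signal only works inside the communication range.
        if sender_distance > self.comm_range:
            return False
        if function == "stop_charging":
            self.charging = False   # claim 8: stop the charging procedure
        return True

class VehiclePairedDevice:
    def __init__(self, vehicle, distance_to_vehicle):
        self.vehicle = vehicle
        self.distance = distance_to_vehicle

    def receive_second_signal(self, function):
        # Relay: transmit the first signal on behalf of the remote trigger.
        return self.vehicle.receive_first_signal(self.distance, function)
```

The design point is that the secondary triggering device never needs pairing with the vehicle (claim 19); only the fixed, in-range paired device does.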
2,600
10,237
10,237
13,249,258
2,626
Techniques involving selective modification of keyboard presentation and functionality. A commanding mode is selectively activated on a virtual keyboard. Activating the commanding mode attributes commands to respective individual keys of the virtual keyboard. Also in response to the commanding mode, indicia suggestive of the command is presented on those individual keys to which the commands were attributed. The commands can be executed in an application in response to selection of the respective individual keys when in commanding mode.
1. A computer-implemented method comprising: activating a commanding mode on a virtual keyboard, which attributes commands to respective individual keys of the virtual keyboard; presenting indicia on the individual keys to which commands are attributed in response to the activation of the commanding mode; and enabling execution of the commands in a processor-implemented application in response to selection of their respective individual keys when in commanding mode. 2. The computer-implemented method of claim 1, wherein enabling execution of the commands comprises initiating delivery of a series of keystroke actions for the respective command to a keyboard handler module in response to the selection of the respective individual key when in commanding mode. 3. The computer-implemented method of claim 1, wherein presenting indicia on the individual keys to which commands are attributed comprises presenting indicia in a language to which the virtual keyboard is configured. 4. The computer-implemented method of claim 1, wherein presenting indicia on the individual keys to which commands are attributed comprises presenting indicia in a language of an operating system on which the processor-implemented application executes. 5. The computer-implemented method of claim 1, wherein activating the commanding mode to attribute commands to respective individual keys comprises changing a default function attributed to the individual keys to the respective commands in response to activating the commanding mode. 6. The computer-implemented method of claim 5, further comprising reverting the individual keys back to their default function after any of the commands is initiated in response to its respective individual key being selected while in commanding mode. 7. The computer-implemented method of claim 5, further comprising reverting the individual keys back to their default function in response to deactivation of the commanding mode. 8. 
The computer-implemented method of claim 1, further comprising activating the commanding mode by selecting at least one predetermined key, and further comprising deactivating the commanding mode by again selecting the at least one predetermined key. 9. The computer-implemented method of claim 1, wherein the processor-implemented application provides the commands and corresponding indicia. 10. The computer-implemented method of claim 1, further comprising activating the commanding mode with a smart trigger that recognizes certain user input as a trigger to activate the commanding mode. 11. An apparatus comprising: a touch-based keyboard comprising a plurality of visually presented keys, wherein at least one of the keys is configured as a modifier key, and wherein at least one of the other keys is configured as a standard key to provide a first function when selected; and a processor configured to recognize selection of the modifier key to enter a commanding mode, and in response to the commanding mode to present command indicia on the standard key, and to reconfigure the standard key to provide a second function identifiable by the command indicia when selected. 12. The apparatus of claim 11, wherein the processor is configured to provide the second function by sending a command in response to selection of the reconfigured standard key when in the commanding mode. 13. The apparatus of claim 12, further comprising a data retention device configured to store a series of keystroke actions for the command, and to send the series of keystroke actions in response to the reconfigured standard key being selected. 14. The apparatus of claim 12, further comprising a data retention device configured to store at least the command indicia as provided by an application to which the command is provided, and wherein the processor is configured to present the command indicia provided by the application when in commanding mode. 15. 
The apparatus of claim 12, wherein the processor is configured to present the command indicia as at least text that identifies the command that is sent in response to selection of the reconfigured standard key when in the commanding mode. 16. The apparatus of claim 12, wherein the processor is configured to present the command indicia as at least an image that identifies the command that is sent in response to selection of the reconfigured standard key when in the commanding mode. 17. Computer-readable media having instructions stored thereon which are executable by a computing system for performing functions comprising: presenting a touch keyboard; providing a selectable modifier key on the touch keyboard configured to enable and disable a commanding mode; recognizing that the commanding mode has been enabled by selection of the modifier key, and in response, dynamically presenting one or more command identifiers on one or more respective keys of the touch keyboard; recognizing that one of the keys having the command identifiers presented thereon has been selected, and in response, providing a series of keystroke actions to a keyboard stack to carry out a command identified by the command identifier of the selected one of the keys; and disabling the commanding mode to return the touch keyboard to its state prior to enabling of the commanding mode. 18. The computer-readable media as in claim 17, wherein the instructions for dynamically presenting one or more command identifiers on one or more respective keys comprise instructions for dynamically presenting the one or more command identifiers in a language installed on the touch keyboard. 19. 
The computer-readable media as in claim 17, wherein the instructions for dynamically presenting one or more command identifiers on one or more respective keys comprise instructions for dynamically presenting the one or more command identifiers in a language of an operating system running an application utilizing the touch keyboard. 20. The computer-readable media as in claim 17, wherein the instructions for providing a selectable modifier key on the touch keyboard comprise instructions for providing a CTRL key that is configured to enable and disable the commanding mode.
Techniques involving selective modification of keyboard presentation and functionality. A commanding mode is selectively activated on a virtual keyboard. Activating the commanding mode attributes commands to respective individual keys of the virtual keyboard. Also in response to the commanding mode, indicia suggestive of the command is presented on those individual keys to which the commands were attributed. The commands can be executed in an application in response to selection of the respective individual keys when in commanding mode.1. A computer-implemented method comprising: activating a commanding mode on a virtual keyboard, which attributes commands to respective individual keys of the virtual keyboard; presenting indicia on the individual keys to which commands are attributed in response to the activation of the commanding mode; and enabling execution of the commands in a processor-implemented application in response to selection of their respective individual keys when in commanding mode. 2. The computer-implemented method of claim 1, wherein enabling execution of the commands comprises initiating delivery of a series of keystroke actions for the respective command to a keyboard handler module in response to the selection of the respective individual key when in commanding mode. 3. The computer-implemented method of claim 1, wherein presenting indicia on the individual keys to which commands are attributed comprises presenting indicia in a language to which the virtual keyboard is configured. 4. The computer-implemented method of claim 1, wherein presenting indicia on the individual keys to which commands are attributed comprises presenting indicia in a language of an operating system on which the processor-implemented application executes. 5. 
The computer-implemented method of claim 1, wherein activating the commanding mode to attribute commands to respective individual keys comprises changing a default function attributed to the individual keys to the respective commands in response to activating the commanding mode. 6. The computer-implemented method of claim 5, further comprising reverting the individual keys back to their default function after any of the commands is initiated in response to its respective individual key being selected while in commanding mode. 7. The computer-implemented method of claim 5, further comprising reverting the individual keys back to their default function in response to deactivation of the commanding mode. 8. The computer-implemented method of claim 1, further comprising activating the commanding mode by selecting at least one predetermined key, and further comprising deactivating the commanding mode by again selecting the at least one predetermined key. 9. The computer-implemented method of claim 1, wherein the processor-implemented application provides the commands and corresponding indicia. 10. The computer-implemented method of claim 1, further comprising activating the commanding mode with a smart trigger that recognizes certain user input as a trigger to activate the commanding mode. 11. An apparatus comprising: a touch-based keyboard comprising a plurality of visually presented keys, wherein at least one of the keys is configured as a modifier key, and wherein at least one of the other keys is configured as a standard key to provide a first function when selected; and a processor configured to recognize selection of the modifier key to enter a commanding mode, and in response to the commanding mode to present command indicia on the standard key, and to reconfigure the standard key to provide a second function identifiable by the command indicia when selected. 12. 
The apparatus of claim 11, wherein the processor is configured to provide the second function by sending a command in response to selection of the reconfigured standard key when in the commanding mode. 13. The apparatus of claim 12, further comprising a data retention device configured to store a series of keystroke actions for the command, and to send the series of keystroke actions in response to the reconfigured standard key being selected. 14. The apparatus of claim 12, further comprising a data retention device configured to store at least the command indicia as provided by an application to which the command is provided, and wherein the processor is configured to present the command indicia provided by the application when in commanding mode. 15. The apparatus of claim 12, wherein the processor is configured to present the command indicia as at least text that identifies the command that is sent in response to selection of the reconfigured standard key when in the commanding mode. 16. The apparatus of claim 12, wherein the processor is configured to present the command indicia as at least an image that identifies the command that is sent in response to selection of the reconfigured standard key when in the commanding mode. 17. 
Computer-readable media having instructions stored thereon which are executable by a computing system for performing functions comprising: presenting a touch keyboard; providing a selectable modifier key on the touch keyboard configured to enable and disable a commanding mode; recognizing that the commanding mode has been enabled by selection of the modifier key, and in response, dynamically presenting one or more command identifiers on one or more respective keys of the touch keyboard; recognizing that one of the keys having the command identifiers presented thereon has been selected, and in response, providing a series of keystroke actions to a keyboard stack to carry out a command identified by the command identifier of the selected one of the keys; and disabling the commanding mode to return the touch keyboard to its state prior to enabling of the commanding mode. 18. The computer-readable media as in claim 17, wherein the instructions for dynamically presenting one or more command identifiers on one or more respective keys comprise instructions for dynamically presenting the one or more command identifiers in a language installed on the touch keyboard. 19. The computer-readable media as in claim 17, wherein the instructions for dynamically presenting one or more command identifiers on one or more respective keys comprise instructions for dynamically presenting the one or more command identifiers in a language of an operating system running an application utilizing the touch keyboard. 20. The computer-readable media as in claim 17, wherein the instructions for providing a selectable modifier key on the touch keyboard comprise instructions for providing a CTRL key that is configured to enable and disable the commanding mode.
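The commanding-mode behavior recited above can be sketched briefly. The key names, command map, and return values below are illustrative assumptions; the claims only require that a modifier key (claim 20 names CTRL) toggles a commanding mode, that commands and their indicia are attributed to individual keys while the mode is active, and that keys revert to their default function when it is deactivated.

```python
# Illustrative sketch of the claimed commanding mode for a virtual keyboard.

COMMAND_MAP = {        # command indicia attributed per key while in commanding mode
    "b": "Bold",
    "i": "Italic",
    "s": "Save",
}

class VirtualKeyboard:
    def __init__(self):
        self.commanding = False
        self.executed = []

    def press(self, key):
        if key == "CTRL":                  # claim 20: CTRL enables/disables the mode
            self.commanding = not self.commanding
            return None
        if self.commanding and key in COMMAND_MAP:
            command = COMMAND_MAP[key]
            self.executed.append(command)  # stand-in for delivering the keystroke
            return command                 # actions for the command to a handler
        return key                         # default function: ordinary character

kb = VirtualKeyboard()
kb.press("b")      # default mode: plain 'b'
kb.press("CTRL")   # enable commanding mode; indicia would now be presented
kb.press("b")      # executes the "Bold" command
kb.press("CTRL")   # disable; keys revert to their default function
print(kb.executed)  # ['Bold']
```

A real implementation would also repaint the key caps with the command indicia while the mode is active, which this sketch omits.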
2,600
10,238
10,238
15,941,566
2,658
A spectrum filler for filling non-coded residual sub-vectors of a transform coded audio signal includes a sub-vector compressor configured to compress actually coded residual sub-vectors. A sub-vector rejecter is configured to reject compressed residual sub-vectors that do not fulfill a predetermined sparseness criterion. A sub-vector collector is configured to concatenate the remaining compressed residual sub-vectors to form a first virtual codebook. A coefficient combiner is configured to combine pairs of coefficients of the first virtual codebook to form a second virtual codebook. A sub-vector filler is configured to fill non-coded residual sub-vectors below a predetermined frequency with coefficients from the first virtual codebook, and to fill non-coded residual sub-vectors above the predetermined frequency with coefficients from the second virtual codebook.
1. An apparatus for filling non-coded residual sub-vectors of a transform coded audio signal, the apparatus comprising a processor and associated memory configured to: compress coded residual sub-vectors; reject compressed residual sub-vectors that do not fulfill a predetermined criterion; concatenate the remaining compressed residual sub-vectors to form a first virtual codebook; combine pairs of coefficients of the first virtual codebook to form a second virtual codebook; and fill non-coded residual sub-vectors below a predetermined frequency with coefficients from the first virtual codebook, and fill non-coded residual sub-vectors above the predetermined frequency with coefficients from the second virtual codebook; wherein the processor and associated memory are further configured to compress components X̂(k) of the coded residual sub-vectors in accordance with: Y(k) = 1 if X̂(k) > 0; 0 if X̂(k) = 0; −1 if X̂(k) < 0, or Y(k) = 1 if X̂(k) > T; 0 if −T ≤ X̂(k) ≤ T; −1 if X̂(k) < −T, where Y(k) are the components of the compressed residual sub-vectors and T is a small positive number that controls the amount of compression. 2. The apparatus according to claim 1, wherein the apparatus is configured to reject compressed residual sub-vectors having less than a predetermined percentage of non-zero components. 3. The apparatus according to claim 1, wherein compressed residual sub-vectors that do not fulfill the criterion ∑_{k=1}^{M} |Y(k)| ≥ 2, where the sub-vector dimension M is 8, are rejected. 4. 
The apparatus according to claim 1, wherein the apparatus is configured to combine pairs of coefficients Y(k) of the first virtual codebook (VC1) in accordance with: Z(k) = sign(Y(k)) × (|Y(k)| + |Y(N−k)|) if Y(k) ≠ 0; Y(N−k) if Y(k) = 0, for k = 0 … N−1, where N is the size of the first virtual codebook and Z(k) are the components of the second virtual codebook. 5. The apparatus according to claim 1, wherein the apparatus is further configured to adjust the energy of filled non-coded residual sub-vectors to obtain a perceptual attenuation. 6. An audio decoder comprising the apparatus according to claim 1. 7. A user equipment (UE) comprising the audio decoder according to claim 6. 8. A method for filling non-coded residual sub-vectors of a transform coded audio signal, the method comprising: compressing coded residual sub-vectors; rejecting compressed residual sub-vectors that do not fulfill a predetermined criterion; concatenating the remaining compressed residual sub-vectors to form a first virtual codebook; combining pairs of coefficients of the first virtual codebook to form a second virtual codebook; filling non-coded residual sub-vectors below a predetermined frequency with coefficients from the first virtual codebook; and filling non-coded residual sub-vectors above the predetermined frequency with coefficients from the second virtual codebook; wherein components X̂(k) of the coded residual sub-vectors are compressed in accordance with: Y(k) = 1 if X̂(k) > 0; 0 if X̂(k) = 0; −1 if X̂(k) < 0, or Y(k) = 1 if X̂(k) > T; 0 if −T ≤ X̂(k) ≤ T; −1 if X̂(k) < −T, where Y(k) are the components of the compressed residual sub-vectors and T is a small positive number that controls the amount of compression. 9. 
The method according to claim 8, wherein rejecting compressed residual sub-vectors that do not fulfill the predetermined criterion comprises rejecting compressed residual sub-vectors having less than a predetermined percentage of non-zero components. 10. The method according to claim 8, wherein compressed residual sub-vectors that do not fulfill the criterion ∑_{k=1}^{M} |Y(k)| ≥ 2, where the sub-vector dimension M is 8, are rejected. 11. The method according to claim 8, wherein combining pairs of coefficients of the first virtual codebook to form the second virtual codebook comprises combining pairs of coefficients Y(k) of the first virtual codebook (VC1) in accordance with: Z(k) = sign(Y(k)) × (|Y(k)| + |Y(N−k)|) if Y(k) ≠ 0; Y(N−k) if Y(k) = 0, for k = 0 … N−1, where N is the size of the first virtual codebook and Z(k) are the components of the second virtual codebook. 12. The method according to claim 8, wherein the method further includes adjusting the energy of filled non-coded residual sub-vectors to obtain a perceptual attenuation. 13. 
A non-transitory computer-readable medium storing a computer program comprising program instructions that when executed on a processor cause the processor to fill non-coded residual sub-vectors of a transform coded audio signal, the computer program including program instructions causing the processor to: compress coded residual sub-vectors; reject compressed residual sub-vectors that do not fulfill a predetermined criterion; concatenate the remaining compressed residual sub-vectors to form a first virtual codebook; combine pairs of coefficients of the first virtual codebook to form a second virtual codebook; and fill non-coded residual sub-vectors below a predetermined frequency with coefficients from the first virtual codebook, and fill non-coded residual sub-vectors above the predetermined frequency with coefficients from the second virtual codebook; wherein compressing coded residual sub-vectors comprises compressing components X̂(k) of the coded residual sub-vectors in accordance with: Y(k) = 1 if X̂(k) > 0; 0 if X̂(k) = 0; −1 if X̂(k) < 0, or Y(k) = 1 if X̂(k) > T; 0 if −T ≤ X̂(k) ≤ T; −1 if X̂(k) < −T, where Y(k) are the components of the compressed residual sub-vectors and T is a small positive number that controls the amount of compression.
A spectrum filler for filling non-coded residual sub-vectors of a transform coded audio signal includes a sub-vector compressor configured to compress actually coded residual sub-vectors. A sub-vector rejecter is configured to reject compressed residual sub-vectors that do not fulfill a predetermined sparseness criterion. A sub-vector collector is configured to concatenate the remaining compressed residual sub-vectors to form a first virtual codebook. A coefficient combiner is configured to combine pairs of coefficients of the first virtual codebook to form a second virtual codebook. A sub-vector filler is configured to fill non-coded residual sub-vectors below a predetermined frequency with coefficients from the first virtual codebook, and to fill non-coded residual sub-vectors above the predetermined frequency with coefficients from the second virtual codebook.1. An apparatus for filling non-coded residual sub-vectors of a transform coded audio signal, the apparatus comprising a processor and associated memory configured to: compress coded residual sub-vectors; reject compressed residual sub-vectors that do not fulfill a predetermined criterion; concatenate the remaining compressed residual sub-vectors to form a first virtual codebook; combine pairs of coefficients of the first virtual codebook to form a second virtual codebook; and fill non-coded residual sub-vectors below a predetermined frequency with coefficients from the first virtual codebook, and fill non-coded residual sub-vectors above the predetermined frequency with coefficients from the second virtual codebook; wherein the processor and associated memory are further configured to compress components X̂(k) of the coded residual sub-vectors in accordance with: Y(k) = 1 if X̂(k) > 0; 0 if X̂(k) = 0; −1 if X̂(k) < 0, or Y(k) = 1 if X̂(k) > T; 0 if −T ≤ X̂(k) ≤ T; −1 if X̂(k) < −T, where Y(k) are the components of the compressed residual sub-vectors and T is a small positive number that controls the amount of compression. 2. The apparatus according to claim 1, wherein the apparatus is configured to reject compressed residual sub-vectors having less than a predetermined percentage of non-zero components. 3. The apparatus according to claim 1, wherein compressed residual sub-vectors that do not fulfill the criterion ∑_{k=1}^{M} |Y(k)| ≥ 2, where the sub-vector dimension M is 8, are rejected. 4. The apparatus according to claim 1, wherein the apparatus is configured to combine pairs of coefficients Y(k) of the first virtual codebook (VC1) in accordance with: Z(k) = sign(Y(k)) × (|Y(k)| + |Y(N−k)|) if Y(k) ≠ 0; Y(N−k) if Y(k) = 0, for k = 0 … N−1, where N is the size of the first virtual codebook and Z(k) are the components of the second virtual codebook. 5. The apparatus according to claim 1, wherein the apparatus is further configured to adjust the energy of filled non-coded residual sub-vectors to obtain a perceptual attenuation. 6. An audio decoder comprising the apparatus according to claim 1. 7. A user equipment (UE) comprising the audio decoder according to claim 6. 8. 
A method for filling non-coded residual sub-vectors of a transform coded audio signal, the method comprising: compressing coded residual sub-vectors; rejecting compressed residual sub-vectors that do not fulfill a predetermined criterion; concatenating the remaining compressed residual sub-vectors to form a first virtual codebook; combining pairs of coefficients of the first virtual codebook to form a second virtual codebook; filling non-coded residual sub-vectors below a predetermined frequency with coefficients from the first virtual codebook; and filling non-coded residual sub-vectors above the predetermined frequency with coefficients from the second virtual codebook; wherein components X̂(k) of the coded residual sub-vectors are compressed in accordance with: Y(k) = 1 if X̂(k) > 0; 0 if X̂(k) = 0; −1 if X̂(k) < 0, or Y(k) = 1 if X̂(k) > T; 0 if −T ≤ X̂(k) ≤ T; −1 if X̂(k) < −T, where Y(k) are the components of the compressed residual sub-vectors and T is a small positive number that controls the amount of compression. 9. The method according to claim 8, wherein rejecting compressed residual sub-vectors that do not fulfill the predetermined criterion comprises rejecting compressed residual sub-vectors having less than a predetermined percentage of non-zero components. 10. The method according to claim 8, wherein compressed residual sub-vectors that do not fulfill the criterion ∑_{k=1}^{M} |Y(k)| ≥ 2, where the sub-vector dimension M is 8, are rejected. 11. 
The method according to claim 8, wherein combining pairs of coefficients of the first virtual codebook to form the second virtual codebook comprises combining pairs of coefficients Y(k) of the first virtual codebook (VC1) in accordance with: Z(k) = sign(Y(k)) × (|Y(k)| + |Y(N−k)|) if Y(k) ≠ 0; Y(N−k) if Y(k) = 0, for k = 0 … N−1, where N is the size of the first virtual codebook and Z(k) are the components of the second virtual codebook. 12. The method according to claim 8, wherein the method further includes adjusting the energy of filled non-coded residual sub-vectors to obtain a perceptual attenuation. 13. A non-transitory computer-readable medium storing a computer program comprising program instructions that when executed on a processor cause the processor to fill non-coded residual sub-vectors of a transform coded audio signal, the computer program including program instructions causing the processor to: compress coded residual sub-vectors; reject compressed residual sub-vectors that do not fulfill a predetermined criterion; concatenate the remaining compressed residual sub-vectors to form a first virtual codebook; combine pairs of coefficients of the first virtual codebook to form a second virtual codebook; and fill non-coded residual sub-vectors below a predetermined frequency with coefficients from the first virtual codebook, and fill non-coded residual sub-vectors above the predetermined frequency with coefficients from the second virtual codebook; wherein compressing coded residual sub-vectors comprises compressing components X̂(k) of the coded residual sub-vectors in accordance with: Y(k) = 1 if X̂(k) > 0; 0 if X̂(k) = 0; −1 if X̂(k) < 0, or Y(k) = 1 if X̂(k) > T; 0 if −T ≤ X̂(k) ≤ T; −1 if X̂(k) < −T, where Y(k) are the components of the compressed residual sub-vectors and T is a small positive number that controls the amount of compression.
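The virtual-codebook construction in these claims can be sketched compactly: ternary compression of each coded sub-vector, rejection of sub-vectors with fewer than two non-zero components (claim 3's criterion with M = 8), concatenation into VC1, and pairwise combination into VC2 (claim 4). The example values are made up, and the index wrap for Y(N−k) at k = 0 is an assumption this sketch makes, since the claim's formula does not define Y(N).

```python
import numpy as np

def compress(x, T=0.0):
    # Y(k) = 1 if X(k) > T; 0 if -T <= X(k) <= T; -1 if X(k) < -T
    x = np.asarray(x, dtype=float)
    return np.where(x > T, 1, np.where(x < -T, -1, 0))

def build_codebooks(sub_vectors, T=0.0):
    # Reject compressed sub-vectors whose sum of |Y(k)| is below 2 (claim 3),
    # then concatenate the survivors into the first virtual codebook VC1.
    kept = [y for y in (compress(v, T) for v in sub_vectors)
            if np.abs(y).sum() >= 2]
    vc1 = np.concatenate(kept)
    N = len(vc1)
    # Second virtual codebook (claim 4):
    # Z(k) = sign(Y(k)) * (|Y(k)| + |Y(N-k)|) if Y(k) != 0, else Y(N-k).
    vc2 = np.empty(N, dtype=int)
    for k in range(N):
        yk = vc1[k]
        ynk = vc1[(N - k) % N]   # wrap at k = 0 is an assumption, not in the claim
        vc2[k] = np.sign(yk) * (abs(yk) + abs(ynk)) if yk != 0 else ynk
    return vc1, vc2

sub_vectors = [np.array([0.9, -0.8, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]),
               np.zeros(8)]          # second sub-vector fails the criterion
vc1, vc2 = build_codebooks(sub_vectors)
print(vc1.tolist())  # [1, -1, 0, 0, 0, 0, 0, 0] -- only the first sub-vector kept
```

VC2's combined coefficients have larger magnitudes than VC1's, which matches the claims' use of VC1 below the predetermined frequency and VC2 above it.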
2,600
10,239
10,239
14,772,831
2,662
A method includes increasing a dynamic range of a first sub-range ( 410, 510 ) of pixel intensity values of at least two images based on an intensity map, thereby creating at least two modified images, determining a deformation vector field between the at least two modified images, and registering the at least two images based on the deformation vector field. An image processing system ( 118 ) includes a processor ( 120 ) and a memory ( 122 ) encoded with at least one image registration instruction ( 124 ). The processor executes the at least one image registration instruction, which causes the processor to: increase a dynamic range of a first sub-range of pixel intensity values of at least two images based on an intensity map, thereby creating at least two modified images; determine a deformation vector field between the at least two modified images, and register the at least two images based on the deformation vector field.
1. A method, comprising: increasing a dynamic range of a first sub-range of pixel intensity values of at least two images based on an intensity map, thereby creating at least two modified images; determining a deformation vector field between the at least two modified images; and registering the at least two images based on the deformation vector field. 2. The method of claim 1, wherein the first sub-range of pixel intensity values are for tissues of interest with a low contrasted boundary. 3. The method of claim 1, the act of increasing the dynamic range comprising: using the intensity map to map the first sub-range of pixel intensity values to a wider sub-range of pixel intensity values. 4. The method of claim 3, further comprising: shifting a second sub-range of pixel intensity values, which are higher than the first sub-range of pixel intensity values, by a constant value corresponding to the increase in the dynamic range of the first sub-range of pixel intensity values. 5. The method of claim 3, further comprising: shifting a first sub-set of a second sub-range of pixel intensity values, which are higher than the first sub-range of pixel intensity values, by a decreasing value such that a first value of the first sub-set corresponds to the increase in the dynamic range of the first sub-range of pixel intensity values and a last value of the first sub-set is shifted by zero. 6. The method of claim 5, wherein a second sub-set of the second sub-range of pixel intensity values, which are higher than the first sub-set of pixel intensity values, are not shifted. 7. The method of claim 4, wherein a third sub-set of the pixel intensity values, which are lower than the first sub-range of pixel intensity values, are not shifted. 8. The method of claim 1, wherein the dynamic range is linearly increased. 9. The method of claim 1, wherein the dynamic range is non-linearly increased. 10. 
The method of claim 1, further comprising: increasing a dynamic range of at least one other sub-range of pixel intensity values of the at least two images. 11. The method of claim 1, further comprising: receiving an input identifying the tissue of interest; and selecting the intensity map from a set of intensity maps, each corresponding to a different tissue, based on the input. 12. The method of claim 1, further comprising: receiving an input identifying at least the first sub-range of pixel intensity values; and generating the intensity map based on the input. 13. The method of claim 11, wherein the input includes at least one of an anatomy of interest, an imaging protocol, an imaging application, or an imaging modality. 14. An image processing system, comprising: a processor; a memory encoded with at least one image registration instruction, wherein the processor executes the at least one image registration instruction, which causes the processor to increase a dynamic range of a first sub-range of pixel intensity values of at least two images based on an intensity map, thereby creating at least two modified images; determine a deformation vector field between the at least two modified images, and register the at least two images based on the deformation vector field. 15. The system of claim 14, wherein the first sub-range of pixel intensity values are for tissue of interest with a low contrasted boundary. 16. The system of claim 14, wherein executing the at least one image registration instruction further causes the processor to: not shift or scale a second sub-set of the pixel intensity values, which are lower than the first sub-range of pixel intensity values, and shift at least a sub-portion of a third sub-range of pixel intensity values, which are higher than the first sub-range of pixel intensity values, by a value corresponding to the increase in the dynamic range of the first sub-range of pixel intensity values. 17. 
The system of claim 14, wherein executing the at least one image registration instruction further causes the processor to: select the intensity map from a set of intensity maps, each corresponding to a different tissue, based on an input, prior to increasing the dynamic range. 18. The system of claim 14, wherein executing the at least one image registration instruction further causes the processor to: generate the intensity map based on an input, wherein the input at least identifies the first sub-range of pixel intensity values. 19. The system of claim 17, wherein the input includes at least one of an anatomy of interest, an imaging protocol, an imaging application, or an imaging modality. 20. A computer readable storage medium encoded with computer readable instructions, which, when executed by a processor, cause the processor to: obtain at least two images to register; identify tissue of interest with a low contrasted boundary; obtain an intensity map for the tissue of interest; apply the intensity map to the at least two images, wherein the intensity map increases a dynamic range of a first sub-range of pixel intensity values of at least two images, thereby creating at least two modified images; determine a deformation vector field between the at least two modified images; and register the at least two images based on the deformation vector field.
A method includes increasing a dynamic range of a first sub-range ( 410, 510 ) of pixel intensity values of at least two images based on an intensity map, thereby creating at least two modified images, determining a deformation vector field between the at least two modified images, and registering the at least two images based on the deformation vector field. An image processing system ( 118 ) includes a processor ( 120 ) and a memory ( 122 ) encoded with at least one image registration instruction ( 124 ). The processor executes the at least one image registration instruction, which causes the processor to: increase a dynamic range of a first sub-range of pixel intensity values of at least two images based on an intensity map, thereby creating at least two modified images; determine a deformation vector field between the at least two modified images, and register the at least two images based on the deformation vector field.1. A method, comprising: increasing a dynamic range of a first sub-range of pixel intensity values of at least two images based on an intensity map, thereby creating at least two modified images; determining a deformation vector field between the at least two modified images; and registering the at least two images based on the deformation vector field. 2. The method of claim 1, wherein the first sub-range of pixel intensity values are for tissues of interest with a low contrasted boundary. 3. The method of claim 1, wherein the act of increasing the dynamic range comprises: using the intensity map to map the first sub-range of pixel intensity values to a wider sub-range of pixel intensity values. 4. The method of claim 3, further comprising: shifting a second sub-range of pixel intensity values, which are higher than the first sub-range of pixel intensity values, by a constant value corresponding to the increase in the dynamic range of the first sub-range of pixel intensity values. 5. 
The method of claim 3, further comprising: shifting a first sub-set of a second sub-range of pixel intensity values, which are higher than the first sub-range of pixel intensity values, by a decreasing value such that a first value of the first sub-set corresponds to the increase in the dynamic range of the first sub-range of pixel intensity values and a last value of the first sub-set is shifted by zero. 6. The method of claim 5, wherein a second sub-set of the second sub-range of pixel intensity values, which are higher than the first sub-set of pixel intensity values, are not shifted. 7. The method of claim 4, wherein a third sub-set of the pixel intensity values, which are lower than the first sub-range of pixel intensity values, are not shifted. 8. The method of claim 1, wherein the dynamic range is linearly increased. 9. The method of claim 1, wherein the dynamic range is non-linearly increased. 10. The method of claim 1, further comprising: increasing a dynamic range of at least one other sub-range of pixel intensity values of at least two images. 11. The method of claim 1, further comprising: receiving an input identifying the tissue of interest; and selecting the intensity map from a set of intensity maps, each corresponding to a different tissue, based on the input. 12. The method of claim 1, further comprising: receiving an input identifying at least the first sub-range of pixel intensity values; and generating the intensity map based on the input. 13. The method of claim 11, wherein the input includes at least one of an anatomy of interest, an imaging protocol, an imaging application, or an imaging modality. 14. 
An image processing system, comprising: a processor; a memory encoded with at least one image registration instruction, wherein the processor executes the at least one image registration instruction, which causes the processor to increase a dynamic range of a first sub-range of pixel intensity values of at least two images based on an intensity map, thereby creating at least two modified images; determine a deformation vector field between the at least two modified images, and register the at least two images based on the deformation vector field. 15. The system of claim 14, wherein the first sub-range of pixel intensity values are for tissue of interest with a low contrasted boundary. 16. The system of claim 14, wherein executing the at least one image registration instruction further causes the processor to: not shift or scale a second sub-set of the pixel intensity values, which are lower than the first sub-range of pixel intensity values, and shift at least a sub-portion of a third sub-range of pixel intensity values, which are higher than the first sub-range of pixel intensity values, by a value corresponding to the increase in the dynamic range of the first sub-range of pixel intensity values. 17. The system of claim 14, wherein executing the at least one image registration instruction further causes the processor to: select the intensity map from a set of intensity maps, each corresponding to a different tissue, based on an input, prior to increasing the dynamic range. 18. The system of claim 14, wherein executing the at least one image registration instruction further causes the processor to: generate the intensity map based on an input, wherein the input at least identifies the first sub-range of pixel intensity values. 19. The system of claim 17, wherein the input includes at least one of an anatomy of interest, an imaging protocol, an imaging application, or an imaging modality. 20. 
A computer readable storage medium encoded with computer readable instructions, which, when executed by a processor, cause the processor to: obtain at least two images to register; identify tissue of interest with a low contrasted boundary; obtain an intensity map for the tissue of interest; apply the intensity map to the at least two images, wherein the intensity map increases a dynamic range of a first sub-range of pixel intensity values of at least two images, thereby creating at least two modified images; determine a deformation vector field between the at least two modified images; and register the at least two images based on the deformation vector field.
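The claims above describe a piecewise intensity map that widens one sub-range of pixel values while leaving lower values untouched and shifting higher values by the constant increase in dynamic range (claims 3, 4, and 7). A minimal sketch of such a map is below; the function name, the `gain` parameter, and the linear form are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def stretch_subrange(image, lo, hi, gain):
    """Hypothetical piecewise-linear intensity map.

    Widens the sub-range [lo, hi] by factor `gain`, leaves values
    below `lo` unchanged, and shifts values above `hi` by the
    constant increase in dynamic range of the widened sub-range.
    """
    img = np.asarray(image, dtype=np.float64)
    out = np.empty_like(img)
    below = img < lo
    inside = (img >= lo) & (img <= hi)
    above = img > hi
    out[below] = img[below]                           # lower values: not shifted
    out[inside] = lo + (img[inside] - lo) * gain      # sub-range of interest: widened
    out[above] = img[above] + (hi - lo) * (gain - 1)  # higher values: constant shift
    return out
```

Applying such a map to both images before computing the deformation vector field gives low-contrast boundaries more numeric contrast for the registration step.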
2,600
10,240
10,240
14,807,485
2,649
A method performed by a mobile lawn mowing robot includes pairing a beacon with the mobile lawn mowing robot. Pairing the beacon with the mobile lawn mowing robot includes determining a distance between the beacon and the mobile lawn mowing robot and confirming that the beacon is within a pairing distance from the mobile lawn mowing robot based on a comparison of the determined distance to a pairing distance. Pairing the beacon with the mobile lawn mowing robot further includes, subsequent to confirming that the beacon is within the pairing distance from the mobile lawn mowing robot, pairing the beacon with the mobile lawn mowing robot, and, following pairing, detecting wideband or ultra-wideband signals from the beacon, and using the wideband or ultra-wideband signals to enable navigation over an area.
1. A method performed by a mobile lawn mowing robot, the method comprising: pairing a beacon with the mobile lawn mowing robot, wherein pairing the beacon with the mobile lawn mowing robot comprises: determining a distance between the beacon and the mobile lawn mowing robot; confirming that the beacon is within a pairing distance from the mobile lawn mowing robot based on a comparison of the determined distance to a pairing distance; and subsequent to confirming that the beacon is within the pairing distance from the mobile lawn mowing robot, pairing the beacon with the mobile lawn mowing robot; and following pairing: detecting wideband or ultra-wideband signals from the beacon; and using the wideband or ultra-wideband signals to enable navigation over an area. 2. The method of claim 1, further comprising: outputting a request, via a user interface, for confirmation that the beacon is among a number of beacons to which the mobile lawn mowing robot is to communicate; and receiving, in response to the request, the confirmation that the beacon is among the number of beacons to which the mobile lawn mowing robot is to communicate; wherein pairing is performed following receipt of the confirmation. 3. The method of claim 1, wherein pairing the beacon with the mobile lawn mowing robot further comprises: prior to determining the distance between the beacon and the mobile lawn mowing robot, identifying a broadcast from the beacon, the broadcast comprising a beacon address for the beacon and a predefined address not specific to any beacon; in response to the broadcast, sending, to the beacon at the beacon address, a robot address for the mobile lawn mowing robot; and receiving, from the beacon, a message from which the distance between the beacon and the mobile lawn mowing robot is determined. 4. 
The method of claim 3, wherein pairing the beacon with the mobile lawn mowing robot further comprises: storing, in memory on the mobile lawn mowing robot, the beacon address in association with one or more other addresses for one or more other beacons paired with the mobile lawn mowing robot. 5. The method of claim 3, wherein pairing the beacon with the mobile lawn mowing robot further comprises: outputting a request to move the beacon toward the mobile lawn mowing robot, wherein the message from the beacon is received following output of the request to move the beacon. 6. The method of claim 3, further comprising, following pairing: sending the beacon address to one or more other beacons; and sending, to the beacon, one or more other addresses for the one or more other beacons. 7. The method of claim 3, further comprising: transmitting the beacon address and the robot address to a server for storage in association with a user account. 8. The method of claim 3, further comprising: identifying an error associated with the beacon; and outputting an indication of the error via a user interface. 9. The method of claim 8, wherein the user interface comprises a feature for indicating that the error is being addressed by replacing the beacon; and wherein the method further comprises: in response to the indication that the error is being addressed by replacing the beacon, causing the mobile lawn mowing robot to listen for the predefined address not specific to any beacon. 10. The method of claim 1, further comprising: receiving a passcode from the beacon; and comparing the passcode to a passcode associated with the mobile lawn mowing robot; wherein pairing is performed following confirmation that the passcode from the beacon matches the passcode associated with the mobile lawn mowing robot. 11-23. (canceled) 24. 
The method of claim 1, further comprising: receiving a status signal indicative of a battery level of the beacon; and outputting on a user interface a low battery level indication associated with the beacon. 25. The method of claim 8, wherein outputting the indication of the error via the user interface comprises causing the user interface to present a map indicating a location of the beacon. 26. The method of claim 1, further comprising, after pairing the beacon with the mobile lawn mowing robot, determining a position of the beacon relative to the area based on distances between the beacon and at least three other beacons paired with the mobile lawn mowing robot. 27. The method of claim 1, further comprising determining that a location of the beacon corresponds to a location of a previously paired beacon, wherein using the wideband or ultra-wideband signals to enable navigation over the area comprises localizing the mobile lawn mowing robot based on the location of the previously paired beacon. 28. The method of claim 1, further comprising after pairing the beacon with the mobile lawn mowing robot, storing information indicative of a perimeter of the area based on a location of the beacon. 29. The method of claim 28, further comprising, before pairing the beacon with the mobile lawn mowing robot, storing other information indicative of the perimeter of the area based on a location of a previously paired beacon, wherein storing the information indicative of the perimeter of the area comprises storing the information indicative of the perimeter of the area after determining a lack of correspondence between a location of the beacon and the location of the previously paired beacon. 30. The method of claim 1, wherein using the wideband or ultra-wideband signals comprises localizing the mobile lawn mowing robot based on the wideband or ultra-wideband signals. 31. 
The method of claim 1, further comprising, before using the wideband or ultra-wideband signals to enable navigation over the area, outputting a beacon check based on a number of the received wideband or ultra-wideband signals. 32. The method of claim 10, further comprising comparing the passcode to a user input, wherein pairing is performed following confirmation that the passcode from the beacon matches the user input.
A method performed by a mobile lawn mowing robot includes pairing a beacon with the mobile lawn mowing robot. Pairing the beacon with the mobile lawn mowing robot includes determining a distance between the beacon and the mobile lawn mowing robot and confirming that the beacon is within a pairing distance from the mobile lawn mowing robot based on a comparison of the determined distance to a pairing distance. Pairing the beacon with the mobile lawn mowing robot further includes, subsequent to confirming that the beacon is within the pairing distance from the mobile lawn mowing robot, pairing the beacon with the mobile lawn mowing robot, and, following pairing, detecting wideband or ultra-wideband signals from the beacon, and using the wideband or ultra-wideband signals to enable navigation over an area.1. A method performed by a mobile lawn mowing robot, the method comprising: pairing a beacon with the mobile lawn mowing robot, wherein pairing the beacon with the mobile lawn mowing robot comprises: determining a distance between the beacon and the mobile lawn mowing robot; confirming that the beacon is within a pairing distance from the mobile lawn mowing robot based on a comparison of the determined distance to a pairing distance; and subsequent to confirming that the beacon is within the pairing distance from the mobile lawn mowing robot, pairing the beacon with the mobile lawn mowing robot; and following pairing: detecting wideband or ultra-wideband signals from the beacon; and using the wideband or ultra-wideband signals to enable navigation over an area. 2. 
The method of claim 1, further comprising: outputting a request, via a user interface, for confirmation that the beacon is among a number of beacons to which the mobile lawn mowing robot is to communicate; and receiving, in response to the request, the confirmation that the beacon is among the number of beacons to which the mobile lawn mowing robot is to communicate; wherein pairing is performed following receipt of the confirmation. 3. The method of claim 1, wherein pairing the beacon with the mobile lawn mowing robot further comprises: prior to determining the distance between the beacon and the mobile lawn mowing robot, identifying a broadcast from the beacon, the broadcast comprising a beacon address for the beacon and a predefined address not specific to any beacon; in response to the broadcast, sending, to the beacon at the beacon address, a robot address for the mobile lawn mowing robot; and receiving, from the beacon, a message from which the distance between the beacon and the mobile lawn mowing robot is determined. 4. The method of claim 3, wherein pairing the beacon with the mobile lawn mowing robot further comprises: storing, in memory on the mobile lawn mowing robot, the beacon address in association with one or more other addresses for one or more other beacons paired with the mobile lawn mowing robot. 5. The method of claim 3, wherein pairing the beacon with the mobile lawn mowing robot further comprises: outputting a request to move the beacon toward the mobile lawn mowing robot, wherein the message from the beacon is received following output of the request to move the beacon. 6. The method of claim 3, further comprising, following pairing: sending the beacon address to one or more other beacons; and sending, to the beacon, one or more other addresses for the one or more other beacons. 7. The method of claim 3, further comprising: transmitting the beacon address and the robot address to a server for storage in association with a user account. 8. 
The method of claim 3, further comprising: identifying an error associated with the beacon; and outputting an indication of the error via a user interface. 9. The method of claim 8, wherein the user interface comprises a feature for indicating that the error is being addressed by replacing the beacon; and wherein the method further comprises: in response to the indication that the error is being addressed by replacing the beacon, causing the mobile lawn mowing robot to listen for the predefined address not specific to any beacon. 10. The method of claim 1, further comprising: receiving a passcode from the beacon; and comparing the passcode to a passcode associated with the mobile lawn mowing robot; wherein pairing is performed following confirmation that the passcode from the beacon matches the passcode associated with the mobile lawn mowing robot. 11-23. (canceled) 24. The method of claim 1, further comprising: receiving a status signal indicative of a battery level of the beacon; and outputting on a user interface a low battery level indication associated with the beacon. 25. The method of claim 8, wherein outputting the indication of the error via the user interface comprises causing the user interface to present a map indicating a location of the beacon. 26. The method of claim 1, further comprising, after pairing the beacon with the mobile lawn mowing robot, determining a position of the beacon relative to the area based on distances between the beacon and at least three other beacons paired with the mobile lawn mowing robot. 27. The method of claim 1, further comprising determining that a location of the beacon corresponds to a location of a previously paired beacon, wherein using the wideband or ultra-wideband signals to enable navigation over the area comprises localizing the mobile lawn mowing robot based on the location of the previously paired beacon. 28. 
The method of claim 1, further comprising after pairing the beacon with the mobile lawn mowing robot, storing information indicative of a perimeter of the area based on a location of the beacon. 29. The method of claim 28, further comprising, before pairing the beacon with the mobile lawn mowing robot, storing other information indicative of the perimeter of the area based on a location of a previously paired beacon, wherein storing the information indicative of the perimeter of the area comprises storing the information indicative of the perimeter of the area after determining a lack of correspondence between a location of the beacon and the location of the previously paired beacon. 30. The method of claim 1, wherein using the wideband or ultra-wideband signals comprises localizing the mobile lawn mowing robot based on the wideband or ultra-wideband signals. 31. The method of claim 1, further comprising, before using the wideband or ultra-wideband signals to enable navigation over the area, outputting a beacon check based on a number of the received wideband or ultra-wideband signals. 32. The method of claim 10, further comprising comparing the passcode to a user input, wherein pairing is performed following confirmation that the passcode from the beacon matches the user input.
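The pairing flow in these claims reduces to a distance gate followed by address bookkeeping: pair only after confirming the determined distance is within the pairing distance, then store the beacon address alongside other paired beacons (claims 1 and 4). A minimal sketch follows; the class name, method names, and the one-meter default are hypothetical choices for illustration only.

```python
class MowerPairing:
    """Hypothetical sketch of the claimed pairing gate and address store."""

    def __init__(self, pairing_distance_m=1.0):
        self.pairing_distance_m = pairing_distance_m
        self.paired_addresses = []  # addresses of beacons already paired

    def try_pair(self, beacon_address, measured_distance_m):
        """Pair only when the determined distance is within the pairing distance."""
        if measured_distance_m > self.pairing_distance_m:
            return False  # beacon too far away; refuse to pair
        # Store the beacon address in association with other paired beacons.
        self.paired_addresses.append(beacon_address)
        return True
```

After pairing, the robot would switch to listening for wideband or ultra-wideband signals from the stored addresses to localize itself over the mowing area.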
2,600
10,241
10,241
16,106,206
2,689
A system includes a first computing device having a first non-transitory machine-readable storage medium, first communication circuitry, and at least one first processor in communication with the first non-transitory machine-readable storage medium and the first communication circuitry. The at least one first processor is configured to execute instructions stored in the first non-transitory machine-readable storage medium to cause the first communication circuitry to receive a first signal from a first transmission medium, calculate a first authentication value for an object based on data included in the first signal, and cause the first communication circuitry to transmit a second signal to the first transmission medium. The second signal identifies whether the object is authentic based, at least in part, on the first authentication value.
1. A system, comprising: a first plurality of processing devices communicatively coupled to one another by way of a network, each of the first plurality of processing devices configured to: receive a first message from at least one of a second plurality of processing devices, the first message including data concerning a transaction; generate, in response to the first message, a first value, wherein the first value is based, at least in part, on data included in the first message; perform a comparison of the first value to a second value stored in a respective database communicatively coupled to the processing device; and transmit a second message to the at least one of the second plurality of processing devices, the second message including data indicative of an outcome of the comparison. 2. The system of claim 1, wherein the transaction includes a purchase of an object, and wherein the transaction is initiated by the at least one of the second plurality of processing devices. 3. The system of claim 2, wherein the first value is an authentication value for determining whether the object is authentic. 4. The system of claim 1, wherein the second value is an authentication value for an object. 5. The system of claim 4, wherein the second value is stored in the respective database in encrypted form. 6. The system of claim 1, wherein each of the first plurality of processing devices is configured to update the respective database periodically with authentication values for a number of objects. 7. The system of claim 1, wherein the database stores version history and tracking information for each of a plurality of objects. 8. 
A system, comprising: a first computing device communicatively coupled to a network and to a first database, the first computing device configured to: receive a first message, the first message including data concerning a transaction; generate, in response to the first message, a first value, wherein the first value is based, at least in part, on data included in the first message; compare the first value and a second value accessed from the first database; and a second computing device communicatively coupled to the network and to a second database, wherein the second computing device is configured to: receive a second message from the first computing device, wherein the second message is generated by the first computing device in response to comparing the first value and the second value, and wherein the second computing device is configured to compare at least one of the first value and the second value to a third value accessed from the second database. 9. The system of claim 8, wherein the second computing device is configured to transmit a third message to the first computing device, the third message including a result based on a comparison of the at least one of the first value and the second value to the third value. 10. The system of claim 8, wherein at least one of the first database and the second database is configured to store data concerning at least one characteristic of an object. 11. The system of claim 10, wherein the at least one characteristic includes tracking data for the object. 12. The system of claim 10, wherein the at least one characteristic includes at least one of a bar code number associated with the object and a serial number of the object. 13. 
The system of claim 10, wherein the second computing device is configured to transmit a third message to a computing device that initiated the transaction and from which the first computing device received the first message, the third message including a result based on a comparison of the at least one of the first value and the second value to the third value. 14. The system of claim 8, wherein the second computing device is configured to determine, based at least in part on comparing the first value and the third value, whether an object is likely to be counterfeit. 15. The system of claim 8, wherein the second computing device is configured to determine, based at least in part on comparing the first value and the third value, a likelihood that an object involved in the transaction is authentic. 16. A distributed system comprising a plurality of computers coupled to a network, wherein each of the computers in the distributed system is configured to: maintain a database with data concerning a first object; receive a first request via the network, the first request concerning whether the first object is authentic; generate a first response to the first request, wherein data included in the first response is based, at least in part, on data maintained in the database; and transmit the first response via the network. 17. The distributed system of claim 16, wherein the data in the database includes tracking data and identification data. 18. The distributed system of claim 17, wherein identification data includes data describing a physical appearance of the first object. 19. The distributed system of claim 16, wherein the database includes data concerning a plurality of objects. 20. 
The distributed system of claim 16, wherein each of the computers in the distributed system is configured to: extract data from the first request; calculate a first authentication value using data extracted from the first request; compare the first authentication value to a second authentication value, the second authentication value derived from the data maintained in the database; determine, based on a result of the comparing of the first authentication value to the second authentication value, whether the object is authentic; and update the database based on the first response.
A system includes a first computing device having a first non-transitory machine-readable storage medium, first communication circuitry, and at least one first processor in communication with the first non-transitory machine-readable storage medium and the first communication circuitry. The at least one first processor is configured to execute instructions stored in the first non-transitory machine-readable storage medium to cause the first communication circuitry to receive a first signal from a first transmission medium, calculate a first authentication value for an object based on data included in the first signal, and cause the first communication circuitry to transmit a second signal to the first transmission medium. The second signal identifies whether the object is authentic based, at least in part, on the first authentication value.1. A system, comprising: a first plurality of processing devices communicatively coupled to one another by way of a network, each of the first plurality of processing devices configured to: receive a first message from at least one of a second plurality of processing devices, the first message including data concerning a transaction; generate, in response to the first message, a first value, wherein the first value is based, at least in part, on data included in the first message; perform a comparison of the first value to a second value stored in a respective database communicatively coupled to the processing device; and transmit a second message to the at least one of the second plurality of processing devices, the second message including data indicative of an outcome of the comparison. 2. The system of claim 1, wherein the transaction includes a purchase of an object, and wherein the transaction is initiated by the at least one of the second plurality of processing devices. 3. The system of claim 2, wherein the first value is an authentication value for determining whether the object is authentic. 4. 
The system of claim 1, wherein the second value is an authentication value for an object. 5. The system of claim 4, wherein the second value is stored in the respective database in encrypted form. 6. The system of claim 1, wherein each of the first plurality of processing devices is configured to update the respective database periodically with authentication values for a number of objects. 7. The system of claim 1, wherein the database stores version history and tracking information for each of a plurality of objects. 8. A system, comprising: a first computing device communicatively coupled to a network and to a first database, the first computing device configured to: receive a first message, the first message including data concerning a transaction; generate, in response to the first message, a first value, wherein the first value is based, at least in part, on data included in the first message; compare the first value and a second value accessed from the first database; and a second computing device communicatively coupled to the network and to a second database, wherein the second computing device is configured to: receive a second message from the first computing device, wherein the second message is generated by the first computing device in response to comparing the first value and the second value, and wherein the second computing device is configured to compare at least one of the first value and the second value to a third value accessed from the second database. 9. The system of claim 8, wherein the second computing device is configured to transmit a third message to the first computing device, the third message including a result based on a comparison of the at least one of the first value and the second value to the third value. 10. The system of claim 8, wherein at least one of the first database and the second database is configured to store data concerning at least one characteristic of an object. 11. 
The system of claim 10, wherein the at least one characteristic includes tracking data for the object. 12. The system of claim 10, wherein the at least one characteristic includes at least one of a bar code number associated with the object and a serial number of the object. 13. The system of claim 10, wherein the second computing device is configured to transmit a third message to a computing device that initiated the transaction and from which the first computing device received the first message, the third message including a result based on a comparison of the at least one of the first value and the second value to the third value. 14. The system of claim 8, wherein the second computing device is configured to determine, based at least in part on comparing the first value and the third value, whether an object is likely to be counterfeit. 15. The system of claim 8, wherein the second computing device is configured to determine, based at least in part on comparing the first value and the third value, a likelihood that an object involved in the transaction is authentic. 16. A distributed system comprising a plurality of computers coupled to a network, wherein each of the computers in the distributed system is configured to: maintain a database with data concerning a first object; receive a first request via the network, the first request concerning whether the first object is authentic; generate a first response to the first request, wherein data included in the first response is based, at least in part, on data maintained in the database; and transmit the first response via the network. 17. The distributed system of claim 16, wherein the data in the database includes tracking data and identification data. 18. The distributed system of claim 17, wherein identification data includes data describing a physical appearance of the first object. 19. The distributed system of claim 16, wherein the database includes data concerning a plurality of objects. 20. 
The distributed system of claim 16, wherein each of the computers in the distributed system is configured to: extract data from the first request; calculate a first authentication value using data extracted from the first request; compare the first authentication value to a second authentication value, the second authentication value derived from the data maintained in the database; determine, based on a result of the comparing of the first authentication value to the second authentication value, whether the object is authentic; and update the database based on the first response.
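The extract-compute-compare-update flow of claims 16-20 can be sketched as follows. The sketch is illustrative only: the SHA-256 digest over sorted key/value pairs, the `AuthNode` class, and the field names are assumptions, since the claims leave the authentication-value derivation unspecified.

```python
import hashlib

def authentication_value(data: dict) -> str:
    # Deterministic digest over the request's sorted key/value pairs
    # (an assumed stand-in for the claims' unspecified value derivation).
    blob = "|".join(f"{k}={data[k]}" for k in sorted(data))
    return hashlib.sha256(blob.encode()).hexdigest()

class AuthNode:
    """One computer in the distributed system, holding its own database."""

    def __init__(self):
        self.db = {}       # object_id -> stored (second) authentication value
        self.history = []  # tracking data, updated after each response

    def register(self, object_id: str, data: dict) -> None:
        self.db[object_id] = authentication_value(data)

    def handle_request(self, object_id: str, data: dict) -> bool:
        first = authentication_value(data)   # value derived from the request
        second = self.db.get(object_id)      # value from the local database
        authentic = second is not None and first == second
        self.history.append((object_id, authentic))  # update per the response
        return authentic
```

Because the value is recomputed from request data and compared to a locally stored value, any node can answer an authenticity request without consulting the others, which matches the distributed arrangement of claim 16.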
2,600
10,242
10,242
15,913,908
2,642
A method of operating a real time locating system (RTLS) includes: receiving, by an RTLS tagged object, RTLS coordinates of the RTLS tagged object within an RTLS perimeter; tracking inertial measurement unit (IMU) data by the RTLS tagged object; determining, by the RTLS tagged object, RTLS coordinate error in the RTLS coordinates using the IMU data; and determining corrected RTLS coordinates for the RTLS tagged object using the error.
1. A method of operating a real time locating system (RTLS), the method comprising: receiving, by an RTLS tagged object, RTLS coordinates of the RTLS tagged object within an RTLS perimeter; tracking inertial measurement unit (IMU) data by the RTLS tagged object; determining, by the RTLS tagged object, RTLS coordinate error in the RTLS coordinates using the IMU data; and determining corrected RTLS coordinates for the RTLS tagged object using the error. 2. The method of claim 1, further comprising: sending the corrected RTLS coordinates of the RTLS tagged object to an RTLS server, in response to determining the corrected RTLS coordinates. 3. The method of claim 1, further comprising: determining an incorrect direction of movement in the RTLS coordinates using the IMU data; and determining corrected RTLS coordinates for the RTLS tagged object by correcting the direction of movement. 4. The method of claim 1, further comprising: determining an overshoot in the RTLS coordinates using the IMU data; and determining corrected RTLS coordinates for the RTLS tagged object by correcting for the overshoot. 5. The method of claim 1, further comprising: receiving delayed RTLS coordinates from the RTLS server; determining corrected RTLS coordinates for the RTLS tagged object using the IMU data; and sending the corrected RTLS coordinates of the RTLS tagged object to the RTLS server. 6. A mobile device, comprising: an RTLS tag; an inertial measurement unit (IMU); a processor, operatively coupled to the RTLS tag and to the IMU, the processor operative to: receive, via the RTLS tag, RTLS coordinates of the RTLS tagged object within an RTLS perimeter; track IMU data as the mobile device moves within the RTLS perimeter; determine RTLS coordinate error in the RTLS coordinates using the IMU data; and determine corrected RTLS coordinates for the RTLS tagged object using the error. 7. 
The mobile device of claim 6, wherein the processor is further operative to: send the corrected RTLS coordinates to an RTLS server, in response to determining the corrected RTLS coordinates. 8. The mobile device of claim 6, wherein the processor is further operative to: determine an incorrect direction of movement in the RTLS coordinates using the IMU data; and determine corrected RTLS coordinates by correcting the direction of movement. 9. The mobile device of claim 6, wherein the processor is further operative to: determine an overshoot in the RTLS coordinates using the IMU data; and determine corrected RTLS coordinates for the RTLS tagged object by correcting for the overshoot. 10. The mobile device of claim 6, wherein the processor is further operative to: receive delayed RTLS coordinates from an RTLS server; determine corrected RTLS coordinates using the IMU data; and send the corrected RTLS coordinates to the RTLS server. 11. A mobile device, comprising: an RTLS tag; an inertial measurement unit (IMU); a processor, operatively coupled to the RTLS tag and to the IMU, the processor operative to: receive, via the RTLS tag, RTLS coordinates of the mobile device within an RTLS perimeter; track inertial measurement unit (IMU) data along with the RTLS coordinates to determine jitter, overshoot, latency and direction errors in the RTLS coordinates; determine RTLS coordinate error in the RTLS coordinates using the IMU data; and determine corrected RTLS coordinates for the mobile device by correcting for jitter, overshoot, latency and direction errors using the determined RTLS coordinate error.
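One way to realize the claimed error determination is to predict the tag's position from displacement integrated out of the IMU data and pull back any RTLS fix that deviates too far from that prediction, which handles overshoot and jitter in one rule. This is a minimal sketch under assumptions: the clamping rule, the `max_dev` threshold, and 2-D coordinates are illustrative, not the patent's actual algorithm.

```python
def correct_rtls(prev, raw, imu_delta, max_dev=1.0):
    """Correct a raw RTLS fix using IMU-derived displacement.

    prev:      last corrected (x, y) position of the tagged object
    raw:       new RTLS fix (x, y) received within the RTLS perimeter
    imu_delta: (dx, dy) displacement integrated from the IMU since prev
    """
    # Where the IMU says the tagged object should be now.
    predicted = (prev[0] + imu_delta[0], prev[1] + imu_delta[1])
    # RTLS coordinate error relative to the IMU prediction.
    err = (raw[0] - predicted[0], raw[1] - predicted[1])
    mag = (err[0] ** 2 + err[1] ** 2) ** 0.5
    if mag <= max_dev:
        return raw  # RTLS fix agrees with the IMU; accept it as-is
    # Overshoot or jitter: clamp the fix back toward the IMU prediction.
    scale = max_dev / mag
    return (predicted[0] + err[0] * scale, predicted[1] + err[1] * scale)
```

A direction error falls out of the same rule: an RTLS fix pointing the wrong way disagrees with the IMU prediction and gets clamped toward it.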
A method of operating a real time locating system (RTLS) includes: receiving, by an RTLS tagged object, RTLS coordinates of the RTLS tagged object within an RTLS perimeter; tracking inertial measurement unit (IMU) data by the RTLS tagged object; determining, by the RTLS tagged object, RTLS coordinate error in the RTLS coordinates using the IMU data; and determining corrected RTLS coordinates for the RTLS tagged object using the error.1. A method of operating a real time locating system (RTLS), the method comprising: receiving, by an RTLS tagged object, RTLS coordinates of the RTLS tagged object within an RTLS perimeter; tracking inertial measurement unit (IMU) data by the RTLS tagged object; determining, by the RTLS tagged object, RTLS coordinate error in the RTLS coordinates using the IMU data; and determining corrected RTLS coordinates for the RTLS tagged object using the error. 2. The method of claim 1, further comprising: sending the corrected RTLS coordinates of the RTLS tagged object to an RTLS server, in response to determining the corrected RTLS coordinates. 3. The method of claim 1, further comprising: determining an incorrect direction of movement in the RTLS coordinates using the IMU data; and determining corrected RTLS coordinates for the RTLS tagged object by correcting the direction of movement. 4. The method of claim 1, further comprising: determining an overshoot in the RTLS coordinates using the IMU data; and determining corrected RTLS coordinates for the RTLS tagged object by correcting for the overshoot. 5. The method of claim 1, further comprising: receiving delayed RTLS coordinates from the RTLS server; determining corrected RTLS coordinates for the RTLS tagged object using the IMU data; and sending the corrected RTLS coordinates of the RTLS tagged object to the RTLS server. 6. 
A mobile device, comprising: an RTLS tag; an inertial measurement unit (IMU); a processor, operatively coupled to the RTLS tag and to the IMU, the processor operative to: receive, via the RTLS tag, RTLS coordinates of the RTLS tagged object within an RTLS perimeter; track IMU data as the mobile device moves within the RTLS perimeter; determine RTLS coordinate error in the RTLS coordinates using the IMU data; and determine corrected RTLS coordinates for the RTLS tagged object using the error. 7. The mobile device of claim 6, wherein the processor is further operative to: send the corrected RTLS coordinates to an RTLS server, in response to determining the corrected RTLS coordinates. 8. The mobile device of claim 6, wherein the processor is further operative to: determine an incorrect direction of movement in the RTLS coordinates using the IMU data; and determine corrected RTLS coordinates by correcting the direction of movement. 9. The mobile device of claim 6, wherein the processor is further operative to: determine an overshoot in the RTLS coordinates using the IMU data; and determine corrected RTLS coordinates for the RTLS tagged object by correcting for the overshoot. 10. The mobile device of claim 6, wherein the processor is further operative to: receive delayed RTLS coordinates from an RTLS server; determine corrected RTLS coordinates using the IMU data; and send the corrected RTLS coordinates to the RTLS server. 11. 
A mobile device, comprising: an RTLS tag; an inertial measurement unit (IMU); a processor, operatively coupled to the RTLS tag and to the IMU, the processor operative to: receive, via the RTLS tag, RTLS coordinates of the mobile device within an RTLS perimeter; track inertial measurement unit (IMU) data along with the RTLS coordinates to determine jitter, overshoot, latency and direction errors in the RTLS coordinates; determine RTLS coordinate error in the RTLS coordinates using the IMU data; and determine corrected RTLS coordinates for the mobile device by correcting for jitter, overshoot, latency and direction errors using the determined RTLS coordinate error.
2,600
10,243
10,243
14,923,568
2,675
A user is enabled to make a different selection as to whether or not to include a blank page for each data processing. A storage portion stores setting information indicating whether or not to perform data processing of image data with a blank page excluded for each data processing, and a control portion judges whether or not each data processing is processing performed with a blank page of the image data excluded based on the setting information stored in the storage portion, and executes first data processing which is judged as processing performed with the blank page excluded by excluding the blank page as well as executes second data processing which is judged as processing performed without excluding the blank page concurrently with the first data processing by including the blank page.
1. An image processing apparatus performing a plurality of different data processings for image data including a blank page, comprising: a storage portion for storing setting information indicating whether or not to perform data processing of the image data with a blank page excluded for each of the data processing; and a control portion for judging whether or not each data processing is a processing to be performed with a blank page of the image data excluded based on the setting information stored in the storage portion, executing first data processing which is judged as a processing to be performed with the blank page excluded by excluding the blank page, and executing a second data processing which is judged as processing performed without excluding the blank page, by including the blank page, wherein the plurality of different data processings are processings performed by a plurality of different functions of the image processing apparatus, the plurality of different functions include at least two of a document filing function of image data, a data transmission function through a network, a print function, a copy function, a facsimile function, and an electronic mail transmission function, the first data processing is executed by at least one of the document filing function of image data, the data transmission function through the network, the print function, the copy function, and the electronic mail transmission function, and the second data processing is executed by the facsimile function. 2. 
The image processing apparatus as defined in claim 1, wherein the control portion executes first display processing for displaying, by excluding the blank page, image data which is a processing target of the first data processing which is judged as processing performed with the blank page excluded, and executes second display processing for displaying, by including the blank page, image data which is a processing target of the second data processing which is judged as processing performed without excluding the blank page concurrently with the first display processing. 3. The image processing apparatus as defined in claim 2, wherein the control portion switches the first display processing, the second display processing executed concurrently with the first display processing, and display processing for displaying each page of the image data with the blank page included, based on an instruction from a user. 4. The image processing apparatus as defined in claim 2, wherein the control portion displays each page of image data serving as a target of the first data processing and the second data processing by thumbnail display. 5. The image processing apparatus as defined in claim 1, wherein the control portion switches processing to be executed from the first data processing or the second data processing to third data processing based on an instruction from a user, and when the third data processing is judged as processing executed with a blank page of the image data excluded, executes the third data processing with the blank page excluded, and when the third data processing is judged as processing executed without excluding the blank page of the image data, executes the third data processing without excluding the blank page. 6. 
The image processing apparatus as defined in claim 5, wherein the control portion displays the image data as a processing target of the third data processing with the blank page excluded when the third data processing is judged as processing executed with a blank page of the image data excluded, and when the third data processing is judged as processing executed without excluding the blank page of the image data, executes third display processing for displaying the image data as a processing target of the third data processing without excluding the blank page, in place of the first display processing corresponding to the first data processing or the second display processing corresponding to the second data processing which has been switched. 7. The image processing apparatus as defined in claim 1, wherein a plurality of the different data processing includes data processing in which a destination to which image data is transmitted is different. 8. An image processing method for performing a plurality of different data processings for image data including a blank page, comprising: an information reading step of reading out setting information from a storage portion storing the setting information indicating whether or not to perform data processing of the image data with a blank page excluded for each of the data processing; and a data processing executing step of judging whether or not each data processing is a processing to be performed with a blank page of the image data excluded based on the setting information read out at the information reading step, and executing first data processing which is judged as a processing to be performed with the blank page excluded by excluding the blank page, and executing a second data processing which is judged as processing performed without excluding the blank page, by including the blank page, wherein the plurality of different data processings are processings performed by a plurality of different functions of an image 
processing apparatus, the plurality of different functions include at least two of a document filing function of image data, a data transmission function through a network, a print function, a copy function, a facsimile function, and an electronic mail transmission function, the first data processing is executed by at least one of the document filing function of image data, the data transmission function through the network, the print function, the copy function, and the electronic mail transmission function, and the second data processing is executed by the facsimile function. 9. The image processing method as defined in claim 8, further comprising: at the data processing executing step, a display step of executing first display processing for displaying, by excluding the blank page, image data which is a processing target of the first data processing which is judged as processing performed with the blank page excluded, and executing second display processing for displaying, by including the blank page, image data which is a processing target of the second data processing which is judged as processing performed without excluding the blank page concurrently with the first display processing. 10. 
An image transmission apparatus performing a plurality of different data processings for image data including a blank page, comprising: a storage portion for storing setting information indicating whether or not to perform data processing of the image data with a blank page excluded for each of the data processing; a control portion for judging whether or not each data processing is a processing to be performed with a blank page of the image data excluded based on the setting information stored in the storage portion, executing first data processing which is judged as a processing to be performed with the blank page excluded by excluding the blank page, and executing a second data processing which is judged as processing performed without excluding the blank page, by including the blank page, wherein the plurality of different data processings are processings performed by a plurality of different functions of the image transmission apparatus, the plurality of different functions include at least two of a document filing function of image data, a data transmission function through a network, a print function, a copy function, a facsimile function, and an electronic mail transmission function, the first data processing is executed by at least one of the document filing function of image data, the data transmission function through the network, the print function, the copy function, and the electronic mail transmission function, and the second data processing is executed by the facsimile function. 11. 
The image transmission apparatus as defined in claim 10, wherein the control portion executes first display processing for displaying, by excluding the blank page, image data which is a processing target of the first data processing which is judged as processing performed with the blank page excluded, and executes second display processing for displaying, by including the blank page, image data which is a processing target of the second data processing which is judged as processing performed without excluding the blank page concurrently with the first display processing. 12. The image transmission apparatus as defined in claim 11, wherein the control portion switches the first display processing, the second display processing executed concurrently with the first display processing, and display processing for displaying each page of the image data with the blank page included, based on an instruction from a user. 13. The image transmission apparatus as defined in claim 11, wherein the control portion displays each page of image data serving as a target of the first data processing and the second data processing by thumbnail display. 14. The image transmission apparatus as defined in claim 10, wherein the control portion switches processing to be executed from the first data processing or the second data processing to third data processing based on an instruction from a user, and when the third data processing is judged as processing executed with a blank page of the image data excluded, executes the third data processing with the blank page excluded, and when the third data processing is judged as processing executed without excluding the blank page of the image data, executes the third data processing without excluding the blank page. 15. 
The image transmission apparatus as defined in claim 14, wherein the control portion displays the image data as a processing target of the third data processing with the blank page excluded when the third data processing is judged as processing executed with a blank page of the image data excluded, and when the third data processing is judged as processing executed without excluding the blank page of the image data, executes third display processing for displaying the image data as a processing target of the third data processing without excluding the blank page, in place of the first display processing corresponding to the first data processing or the second display processing corresponding to the second data processing which has been switched. 16. The image transmission apparatus as defined in claim 10, wherein a plurality of the different data processing includes data processing in which a destination to which image data is transmitted is different.
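The per-function setting lookup described in these claims (filing, e-mail, etc. exclude blank pages; facsimile keeps them) can be sketched as follows. The `is_blank` heuristic, the `SETTINGS` table, and the string representation of pages are illustrative assumptions standing in for the storage portion and control portion.

```python
def is_blank(page: str) -> bool:
    # Assumed stand-in for real blank-page detection on scanned image data.
    return not page.strip()

# Setting information per function: True means "process with blank pages
# excluded". Mirrors the claims' split: filing/e-mail exclude, fax includes.
SETTINGS = {"filing": True, "email": True, "fax": False}

def run_job(function: str, pages: list, settings: dict = SETTINGS) -> list:
    """Return the pages the given function would actually process."""
    if settings[function]:
        return [p for p in pages if not is_blank(p)]  # first data processing
    return list(pages)                                # second data processing
```

Because each function consults its own entry in the settings table, two jobs over the same scanned document can run concurrently with different blank-page behavior, which is the point of the claimed arrangement.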
A user is enabled to make a different selection as to whether or not to include a blank page for each data processing. A storage portion stores setting information indicating whether or not to perform data processing of image data with a blank page excluded for each data processing, and a control portion judges whether or not each data processing is processing performed with a blank page of the image data excluded based on the setting information stored in the storage portion, and executes first data processing which is judged as processing performed with the blank page excluded by excluding the blank page as well as executes second data processing which is judged as processing performed without excluding the blank page concurrently with the first data processing by including the blank page.1. An image processing apparatus performing a plurality of different data processings for image data including a blank page, comprising: a storage portion for storing setting information indicating whether or not to perform data processing of the image data with a blank page excluded for each of the data processing; and a control portion for judging whether or not each data processing is a processing to be performed with a blank page of the image data excluded based on the setting information stored in the storage portion, executing first data processing which is judged as a processing to be performed with the blank page excluded by excluding the blank page, and executing a second data processing which is judged as processing performed without excluding the blank page, by including the blank page, wherein the plurality of different data processings are processings performed by a plurality of different functions of the image processing apparatus, the plurality of different functions include at least two of a document filing function of image data, a data transmission function through a network, a print function, a copy function, a facsimile function, and an electronic mail 
transmission function, the first data processing is executed by at least one of the document filing function of image data, the data transmission function through the network, the print function, the copy function, and the electronic mail transmission function, and the second data processing is executed by the facsimile function. 2. The image processing apparatus as defined in claim 1, wherein the control portion executes first display processing for displaying, by excluding the blank page, image data which is a processing target of the first data processing which is judged as processing performed with the blank page excluded, and executes second display processing for displaying, by including the blank page, image data which is a processing target of the second data processing which is judged as processing performed without excluding the blank page concurrently with the first display processing. 3. The image processing apparatus as defined in claim 2, wherein the control portion switches the first display processing, the second display processing executed concurrently with the first display processing, and display processing for displaying each page of the image data with the blank page included, based on an instruction from a user. 4. The image processing apparatus as defined in claim 2, wherein the control portion displays each page of image data serving as a target of the first data processing and the second data processing by thumbnail display. 5. 
The image processing apparatus as defined in claim 1, wherein the control portion switches processing to be executed from the first data processing or the second data processing to third data processing based on an instruction from a user, and when the third data processing is judged as processing executed with a blank page of the image data excluded, executes the third data processing with the blank page excluded, and when the third data processing is judged as processing executed without excluding the blank page of the image data, executes the third data processing without excluding the blank page. 6. The image processing apparatus as defined in claim 5, wherein the control portion displays the image data as a processing target of the third data processing with the blank page excluded when the third data processing is judged as processing executed with a blank page of the image data excluded, and when the third data processing is judged as processing executed without excluding the blank page of the image data, executes third display processing for displaying the image data as a processing target of the third data processing without excluding the blank page, in place of the first display processing corresponding to the first data processing or the second display processing corresponding to the second data processing which has been switched. 7. The image processing apparatus as defined in claim 1, wherein a plurality of the different data processing includes data processing in which a destination to which image data is transmitted is different. 8. 
An image processing method for performing a plurality of different data processings for image data including a blank page, comprising: an information reading step of reading out setting information from a storage portion storing the setting information indicating whether or not to perform data processing of the image data with a blank page excluded for each of the data processing; and a data processing executing step of judging whether or not each data processing is a processing to be performed with a blank page of the image data excluded based on the setting information read out at the information reading step, and executing first data processing which is judged as a processing to be performed with the blank page excluded by excluding the blank page, and executing a second data processing which is judged as processing performed without excluding the blank page, by including the blank page, wherein the plurality of different data processings are processings performed by a plurality of different functions of an image processing apparatus, the plurality of different functions include at least two of a document filing function of image data, a data transmission function through a network, a print function, a copy function, a facsimile function, and an electronic mail transmission function, the first data processing is executed by at least one of the document filing function of image data, the data transmission function through the network, the print function, the copy function, and the electronic mail transmission function, and the second data processing is executed by the facsimile function. 9. 
The image processing method as defined in claim 8, further comprising: at the data processing executing step, a display step of executing first display processing for displaying, by excluding the blank page, image data which is a processing target of the first data processing which is judged as processing performed with the blank page excluded, and executing second display processing for displaying, by including the blank page, image data which is a processing target of the second data processing which is judged as processing performed without excluding the blank page concurrently with the first display processing. 10. An image transmission apparatus performing a plurality of different data processings for image data including a blank page, comprising: a storage portion for storing setting information indicating whether or not to perform data processing of the image data with a blank page excluded for each of the data processing; a control portion for judging whether or not each data processing is a processing to be performed with a blank page of the image data excluded based on the setting information stored in the storage portion, executing first data processing which is judged as a processing to be performed with the blank page excluded by excluding the blank page, and executing a second data processing which is judged as processing performed without excluding the blank page, by including the blank page, wherein the plurality of different data processings are processings performed by a plurality of different functions of the image transmission apparatus, the plurality of different functions include at least two of a document filing function of image data, a data transmission function through a network, a print function, a copy function, a facsimile function, and an electronic mail transmission function, the first data processing is executed by at least one of the document filing function of image data, the data transmission function through the network, the 
print function, the copy function, and the electronic mail transmission function, and the second data processing is executed by the facsimile function. 11. The image transmission apparatus as defined in claim 10, wherein the control portion executes first display processing for displaying, by excluding the blank page, image data which is a processing target of the first data processing which is judged as processing performed with the blank page excluded, and executes second display processing for displaying, by including the blank page, image data which is a processing target of the second data processing which is judged as processing performed without excluding the blank page concurrently with the first display processing. 12. The image transmission apparatus as defined in claim 11, wherein the control portion switches the first display processing, the second display processing executed concurrently with the first display processing, and display processing for displaying each page of the image data with the blank page included, based on an instruction from a user. 13. The image transmission apparatus as defined in claim 11, wherein the control portion displays each page of image data serving as a target of the first data processing and the second data processing by thumbnail display. 14. The image transmission apparatus as defined in claim 10, wherein the control portion switches processing to be executed from the first data processing or the second data processing to third data processing based on an instruction from a user, and when the third data processing is judged as processing executed with a blank page of the image data excluded, executes the third data processing with the blank page excluded, and when the third data processing is judged as processing executed without excluding the blank page of the image data, executes the third data processing without excluding the blank page. 15. 
The image transmission apparatus as defined in claim 14, wherein the control portion displays the image data as a processing target of the third data processing with the blank page excluded when the third data processing is judged as processing executed with a blank page of the image data excluded, and when the third data processing is judged as processing executed without excluding the blank page of the image data, executes third display processing for displaying the image data as a processing target of the third data processing without excluding the blank page, in place of the first display processing corresponding to the first data processing or the second display processing corresponding to the second data processing which has been switched. 16. The image transmission apparatus as defined in claim 10, wherein a plurality of the different data processing includes data processing in which a destination to which image data is transmitted is different.
2,600
10,244
10,244
14,943,613
2,613
The embodiments described herein provide devices and methods for image processing. Specifically, the embodiments described herein provide techniques for blending graphical layers together into an image for display. In general, these techniques utilize multiple display control units to blend together more layers than could be achieved using a single display control unit. This blending of additional layers can provide improved image quality compared to traditional techniques that use only the blending capability of a single display control unit.
1. A processing unit, comprising: a first display control unit, wherein the first display control unit is configured to receive a first plurality of graphical layers and blend the first plurality of graphical layers to generate a first composed image; a memory unit, the memory unit coupled to the first display control unit and configured to receive and store the first composed image; and a second display control unit, wherein the second display control unit is coupled to the memory unit and is configured to receive a second plurality of graphical layers and the first composed image and blend the second plurality of graphical layers and the first composed image to generate a final composed image. 2. The processing unit of claim 1, wherein the memory unit includes a first memory buffer and a second memory buffer, and wherein the first display control unit is configured to switch between writing the first composed image to the first memory buffer and writing the first composed image to the second memory buffer. 3. The processing unit of claim 1, wherein the memory unit includes a first memory buffer and a second memory buffer, and wherein the second display control unit is configured to switch between receiving the first composed image from the first memory buffer and receiving the first composed image from the second memory buffer. 4. The processing unit of claim 1, wherein the memory unit is configured to receive the first composed image by being configured to read the first composed image from the memory unit. 5. 
The processing unit of claim 1, further comprising a buffer switch module, and wherein the memory unit includes a first memory buffer and a second memory buffer, where the first display control unit is configured to selectively write the first composed image to either the first memory buffer or the second memory buffer as selected by the buffer switch module, and where the second display control unit is configured to selectively read the first composed image from either the first memory buffer or the second memory buffer as selected by the buffer switch module. 6. The processing unit of claim 5, wherein the buffer switch module alternates the writing of the first composed image between the first memory buffer and the second memory buffer, and wherein the buffer switch module alternates in opposite fashion the reading of the first composed image between the second memory buffer and the first memory buffer. 7. The processing unit of claim 1, wherein the first display control unit includes M inputs for receiving the first plurality of graphical layers, and wherein the second display control unit includes N inputs for receiving the second plurality of graphical layers and the first composed image, such that the final composed image can include data from M+N−1 graphical layers. 8. The processing unit of claim 7, wherein the M inputs and N inputs each comprise a direct memory access (DMA) channel. 9. The processing unit of claim 1, wherein the processing unit is integrated with at least one general purpose processor on a system on chip (SoC) device. 10. The processing unit of claim 1, further comprising a display coupled to the processing unit, the display configured to display the final composed image. 11. 
A processing unit, comprising: a memory unit, the memory unit including a first memory buffer and a second memory buffer; a buffer switch module coupled to the memory unit; a first display control unit coupled to the memory unit, wherein the first display control unit is configured to receive a first plurality of graphical layers and blend the first plurality of graphical layers together to generate first composed images, the first display control unit further configured to selectively write each of the first composed images to either the first memory buffer or the second memory buffer as selected by the buffer switch module; and a second display control unit coupled to the memory unit, wherein the second display control unit is configured to selectively read each of the first composed images from either the first memory buffer or the second memory buffer as selected by the buffer switch module, the second display control unit further configured to receive a second plurality of graphical layers and blend the second plurality of graphical layers together with the first composed image to generate second composed images, the second display control unit further configured to output the second composed images to a display interface. 12. The processing unit of claim 11, wherein the processing unit is integrated with at least one general purpose processor on a system on chip (SoC) device. 13. The processing unit of claim 12, wherein the buffer switch module is implemented with the general purpose processor. 14. A method, comprising: blending a first plurality of graphical layers to generate a first composed image; writing the first composed image to a memory; reading the first composed image from the memory; and blending a second plurality of graphical layers with the first composed image to generate a final composed image. 15. 
The method of claim 14, wherein the writing the first composed image to the memory comprises selectively writing to one of a first memory buffer and a second memory buffer. 16. The method of claim 15, wherein the reading the first composed image from the memory comprises selectively reading from one of the first memory buffer and the second memory buffer. 17. The method of claim 16, wherein the selectively reading and selectively writing comprise alternating the writing of the first composed image between the first memory buffer and the second memory buffer, and alternating in opposite fashion the reading of the first composed image between the second memory buffer and the first memory buffer. 18. The method of claim 14, wherein the first plurality of graphical layers comprises M layers, and wherein the second plurality of graphical layers comprises N layers, and wherein the final composed image can include data from M+N−1 layers. 19. The method of claim 14, further comprising receiving the first plurality of graphical layers via direct memory access (DMA) channels. 20. The method of claim 14, further comprising outputting the final composed image to a display.
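The technique described above, in which a first display control unit blends M layers into an intermediate image held in one of two alternating ("ping-pong") buffers and a second unit blends that image with further layers, can be sketched as follows. This is an illustrative model, not the patent's hardware: `blend`, `BufferSwitch`, and `compose` are hypothetical names, and pixels are stored in plain dictionaries for clarity.

```python
def blend(layers):
    """Alpha-blend layers back-to-front; each layer is (pixels, alpha)."""
    out = dict(layers[0][0])  # copy of the bottom layer's pixels
    for pixels, alpha in layers[1:]:
        for xy, value in pixels.items():
            out[xy] = alpha * value + (1 - alpha) * out.get(xy, 0)
    return out

class BufferSwitch:
    """Alternates writes between two buffers; reads target the other one."""
    def __init__(self):
        self.buffers = [None, None]
        self.write_index = 0

    def write(self, image):
        self.buffers[self.write_index] = image
        self.write_index ^= 1  # next write goes to the other buffer

    def read(self):
        # Read the buffer most recently written (opposite of the write target).
        return self.buffers[self.write_index ^ 1]

def compose(first_layers, second_layers, switch):
    """Two-stage composition: M + (N - 1) layers via an intermediate buffer."""
    switch.write(blend(first_layers))       # first display control unit
    intermediate = (switch.read(), 1.0)     # intermediate image, fully opaque
    return blend([intermediate] + list(second_layers))  # second unit
```

The `BufferSwitch` mirrors claim 6: while the first unit writes frame k into one buffer, the second unit reads frame k-1 from the other, so neither unit ever touches a half-written image.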
2,600
10,245
10,245
15,782,778
2,696
An apparatus is provided. The apparatus has a first member, a second member, and a third member. The third member couples the first member and the second member such that the second member is parallel to the first member, the third member is perpendicular to the first member and the second member, and the third member is in between the first member and the second member. Further, the apparatus has at least one slot integrated within a rear portion of the third member. In addition, the apparatus has an adjustable member positioned on a front portion of the third member that adheres a mobile computing device to the front portion of the third member. The adjustable member is adjusted within the at least one slot.
1. An apparatus comprising: a first member; a second member; a third member that couples the first member and the second member such that the second member is parallel to the first member, the third member is perpendicular to the first member and the second member, and the third member is in between the first member and the second member; at least one slot integrated within a rear portion of the third member; and an adjustable member positioned on a front portion of the third member that adheres a mobile computing device to the front portion of the third member, the adjustable member being adjusted within the at least one slot. 2. The apparatus of claim 1, wherein the adjustable member further comprises a knob positioned on the rear portion of the third member that secures the adjustable member to the mobile computing device on the front portion of the third member. 3. The apparatus of claim 1, wherein the adjustable member comprises a gripping member positioned on the front portion of the third member. 4. The apparatus of claim 1, wherein the adjustable member comprises a clip member positioned on the front portion of the third member. 5. The apparatus of claim 1, wherein the adjustable member comprises a hook member positioned on the front portion of the third member. 6. The apparatus of claim 1, wherein the adjustable member adjusts to the horizontal dimensions of the mobile computing device so that the mobile computing device is adhered to the third member. 7. The apparatus of claim 1, wherein the adjustable member adjusts to the vertical dimensions of the mobile computing device so that the mobile computing device is adhered to the third member. 8. The apparatus of claim 1, wherein the third member is configured so that a lens of the mobile computing device is positioned to a side of the third member. 9. 
The apparatus of claim 1, wherein the third member comprises a border portion that surrounds a cavity so that a lens of the mobile computing device is positioned in a center portion of the third member. 10. The apparatus of claim 1, further comprising a handle positioned perpendicular to, and in between, the first member and the second member. 11. An apparatus comprising: a first member; a second member; a third member that couples the first member and the second member such that the second member is parallel to the first member, the third member is perpendicular to the first member and the second member, and the third member is in between the first member and the second member; at least one slot integrated within a rear portion of the third member; and an adjustable member positioned on a front portion of the third member that adheres a receptacle for a mobile computing device to the front portion of the third member, the adjustable member being adjusted within the at least one slot. 12. The apparatus of claim 11, wherein the adjustable member further comprises a knob positioned on the rear portion of the third member that secures the adjustable member to the receptacle of the mobile computing device on the front portion of the third member. 13. The apparatus of claim 11, wherein the adjustable member comprises a gripping member positioned on the front portion of the third member. 14. The apparatus of claim 11, wherein the adjustable member comprises a clip member positioned on the front portion of the third member. 15. The apparatus of claim 11, wherein the adjustable member comprises a hook member positioned on the front portion of the third member. 16. The apparatus of claim 11, wherein the adjustable member adjusts to the horizontal dimensions of the mobile computing device so that the mobile computing device is adhered to the third member. 17. 
The apparatus of claim 11, wherein the adjustable member adjusts to the vertical dimensions of the mobile computing device so that the mobile computing device is adhered to the third member. 18. The apparatus of claim 11, wherein the third member is configured so that a lens of the mobile computing device is positioned to a side of the third member. 19. The apparatus of claim 11, wherein the third member comprises a border portion that surrounds a cavity so that a lens of the mobile computing device is positioned in a center portion of the third member. 20. The apparatus of claim 11, further comprising a handle positioned perpendicular to, and in between, the first member and the second member.
2,600
10,246
10,246
15,349,153
2,624
Systems, apparatuses and methods for outputting fields of view of a virtual reality system according to postures of a user of the virtual reality system can include a sensor for detecting a posture of the user of the virtual reality system and a display for outputting data indicative of a field of view according to the user's posture. When the sensor detects a user's posture, the virtual reality system can display an alternate field of view that indicates to the user that the posture has been detected. When the sensor no longer detects the posture, the virtual reality system can display the original field of view again.
1. A method for outputting fields of view of a virtual reality system according to postures of a user of the virtual reality system, the method comprising: outputting a field of view to a display of the virtual reality system; detecting, using a sensor of the virtual reality system, a posture of the user; and altering the field of view in response to the detecting. 2. The method of claim 1, wherein altering the field of view includes changing the field of view to urge the user to change the detected posture of the user. 3. The method of claim 1, wherein the field of view is determined by a computing device of the virtual reality system independently of the posture of the user. 4. The method of claim 1, wherein the sensor includes one or more video cameras. 5. The method of claim 1, wherein the sensor includes one or more accelerometers. 6. The method of claim 1, wherein the sensor includes one or more three dimensional sensors. 7. The method of claim 1, wherein the posture of the user detected using the sensor is predetermined for the user. 8. A system for outputting fields of view of a virtual reality system according to postures of a user of the virtual reality system, the system comprising: a sensor of the virtual reality system, wherein the sensor detects a posture of the user of the virtual reality system; and a display of the virtual reality system, wherein the display outputs data indicative of a field of view according to the posture. 9. The system of claim 8, wherein the display outputs data indicative of the field of view independently of the posture of the user when the sensor does not detect a posture of the user. 10. The system of claim 8, wherein the sensor includes one or more video cameras. 11. The system of claim 8, wherein the sensor includes one or more accelerometers. 12. The system of claim 8, wherein the sensor includes one or more three dimensional sensors. 13. 
The system of claim 8, wherein the posture of the user detected using the sensor is predetermined for the user. 14. An apparatus for outputting fields of view of a virtual reality system according to postures of a user of the virtual reality system, the apparatus comprising: a computing device comprising: a memory; and a processor configured to execute instructions stored in the memory to: transmit data indicative of a first field of view; receive data indicative of a posture of the user from a sensor of the virtual reality system; and transmit data indicative of a second field of view in response to the data indicative of the posture of the user. 15. The apparatus of claim 14, wherein the second field of view includes changing the first field of view to urge the user to change the posture of the user. 16. The apparatus of claim 14, wherein the field of view is determined by the computing device independently of the posture of the user. 17. The apparatus of claim 14, wherein the sensor includes one or more video cameras. 18. The apparatus of claim 14, wherein the sensor includes one or more accelerometers. 19. The apparatus of claim 14, wherein the sensor includes one or more three dimensional sensors. 20. The apparatus of claim 14, wherein the first field of view depicts an object viewed by the user; and wherein the processor is further configured to execute instructions stored in the memory to generate the second field of view in which the object appears to the user to be repositioned, relative to the first field of view, to urge the user to change the posture of the user.
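The behavior described in the abstract and claims above, where the display shows an altered field of view while the sensor reports the posture and reverts once it is no longer detected, reduces to a simple selection over a stream of sensor readings. A hedged sketch with illustrative names (the patent does not specify this interface):

```python
def select_field_of_view(original_fov, altered_fov, posture_detected):
    """Return the field of view to display for the current sensor reading."""
    return altered_fov if posture_detected else original_fov

def render_sequence(original_fov, altered_fov, sensor_readings):
    """Map a stream of boolean posture detections to displayed fields of view.

    The altered view appears only while the posture is detected; the
    original view is restored as soon as detection stops, matching the
    abstract's description.
    """
    return [select_field_of_view(original_fov, altered_fov, detected)
            for detected in sensor_readings]
```

For example, readings of `[False, True, True, False]` yield the original, then the altered view for two frames, then the original again.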
2,600
10,247
10,247
15,135,749
2,699
There is provided a solid-state image sensor including pixels each at least including light receiving parts receiving light to generate charge, a transfer part transferring the charge accumulated in the light receiving parts, and memory parts holding the charge transferred via the transfer part, and a predetermined number of elements shared by the plurality of pixels, the predetermined number of elements being for outputting a pixel signal at a level corresponding to the charge, wherein one or some of the plurality of pixels is/are a correction pixel(s) outputting a correction pixel signal used for correcting a pixel signal outputted from pixels other than the one or some of the plurality of pixels, and one or some of the predetermined number of elements is/are formed on a wiring layer side of the light receiving parts included in the correction pixel(s).
1. An image sensor comprising: a plurality of pixels, at least one of the pixels including: a first photoelectric conversion part; a second photoelectric conversion part; a third photoelectric conversion part; a fourth photoelectric conversion part; an amplification transistor; a reset transistor; and a selection transistor, wherein each of the amplification transistor, the reset transistor, and the selection transistor is shared by the first photoelectric conversion part, the second photoelectric conversion part, the third photoelectric conversion part, and the fourth photoelectric conversion part, and wherein the first photoelectric conversion part overlaps a gate electrode of the amplification transistor, a gate electrode of the reset transistor, and a gate electrode of the selection transistor. 2. The image sensor according to claim 1, wherein each of the second photoelectric conversion part, the third photoelectric conversion part, and the fourth photoelectric conversion part does not overlap the gate electrode of the amplification transistor, the gate electrode of the reset transistor, and the gate electrode of the selection transistor. 3. An image sensor comprising: a plurality of pixels, at least one of the pixels including: a first photoelectric conversion part; a second photoelectric conversion part; an amplification transistor; a reset transistor; and a selection transistor, wherein each of the amplification transistor, the reset transistor, and the selection transistor is shared by the first photoelectric conversion part and the second photoelectric conversion part, and wherein the first photoelectric conversion part overlaps a gate electrode of the amplification transistor, a gate electrode of the reset transistor, and a gate electrode of the selection transistor. 4. 
The image sensor according to claim 3, wherein the second photoelectric conversion part does not overlap the gate electrode of the amplification transistor, the gate electrode of the reset transistor, and the gate electrode of the selection transistor.
There is provided a solid-state image sensor including pixels each at least including light receiving parts receiving light to generate charge, a transfer part transferring the charge accumulated in the light receiving parts, and memory parts holding the charge transferred via the transfer part, and a predetermined number of elements shared by the plurality of pixels, the predetermined number of elements being for outputting a pixel signal at a level corresponding to the charge, wherein one or some of the plurality of pixels is/are a correction pixel(s) outputting a correction pixel signal used for correcting a pixel signal outputted from pixels other than the one or some of the plurality of pixels, and one or some of the predetermined number of elements is/are formed on a wiring layer side of the light receiving parts included in the correction pixel(s).1. An image sensor comprising: a plurality of pixels, at least one of the pixels including: a first photoelectric conversion part; a second photoelectric conversion part; a third photoelectric conversion part; a fourth photoelectric conversion part; an amplification transistor; a reset transistor; and a selection transistor, wherein each of the amplification transistor, the reset transistor, and the selection transistor is shared by the first photoelectric conversion part, the second photoelectric conversion part, the third photoelectric conversion part, and the fourth photoelectric conversion part, and wherein the first photoelectric conversion part overlaps a gate electrode of the amplification transistor, a gate electrode of the reset transistor, and a gate electrode of the selection transistor. 2. 
The image sensor according to claim 1, wherein each of the second photoelectric conversion part, the third photoelectric conversion part, and the fourth photoelectric conversion part does not overlap the gate electrode of the amplification transistor, the gate electrode of the reset transistor, and the gate electrode of the selection transistor. 3. An image sensor comprising: a plurality of pixels, at least one of the pixels including: a first photoelectric conversion part; a second photoelectric conversion part; an amplification transistor; a reset transistor; and a selection transistor, wherein each of the amplification transistor, the reset transistor, and the selection transistor is shared by the first photoelectric conversion part and the second photoelectric conversion part, and wherein the first photoelectric conversion part overlaps a gate electrode of the amplification transistor, a gate electrode of the reset transistor, and a gate electrode of the selection transistor. 4. The image sensor according to claim 3, wherein the second photoelectric conversion part does not overlap the gate electrode of the amplification transistor, the gate electrode of the reset transistor, and the gate electrode of the selection transistor.
2,600
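The abstract of the image-sensor record above describes a correction pixel whose output signal is used to correct the pixel signals of the other pixels in a shared group. The patent does not state the correction formula; the sketch below assumes a simple offset subtraction purely for illustration, and the function name is hypothetical.

```python
# Hedged sketch: one pixel in a shared group is a correction pixel; its
# signal is subtracted from every other pixel's signal. The subtraction
# is an assumed model, not the patent's specified correction.

def correct_shared_pixel_group(pixel_signals, correction_index=0):
    """Return the signals of all non-correction pixels with the
    correction pixel's signal removed as a common offset.
    """
    correction = pixel_signals[correction_index]
    return [signal - correction
            for i, signal in enumerate(pixel_signals)
            if i != correction_index]
```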
10,248
10,248
14,882,717
2,628
A touchscreen apparatus for styling a content is provided. The touchscreen apparatus for drawing a figure content includes an input device receiving a touch input of a figure from a user and receiving an input related to configuration information of the figure, a processor configured to reconstruct the figure, which is pre-stored, based on at least one of the received touch input and the received configuration information of the figure to obtain a reconstructed figure, and a display configured to display the reconstructed figure on a screen, wherein the configuration information of the figure includes at least one of information indicating a length of the figure and information indicating an angle of the figure.
1. A method of drawing a figure content on an apparatus, the method comprising: receiving a touch input of a figure from a user; receiving an input related to configuration information of the figure; reconstructing a figure, which is pre-stored, based on at least one of the received touch input and the received configuration information of the figure to form a reconstructed figure; and displaying the reconstructed figure on a touchscreen, wherein the configuration information of the figure comprises at least one of information indicating a length of the figure and information indicating an angle of the figure. 2. The method of claim 1, wherein the received touch input of the figure comprises at least one of a swipe input that draws a side of a figure to be drawn and a touch input that selects a figure that is pre-stored in the apparatus. 3. The method of claim 1, further comprising displaying information of the touch input on the touchscreen. 4. The method of claim 1, further comprising, when it is determined that the figure that is pre-stored is not reconstructable based on at least one of the received touch input and the received configuration information of the figure, displaying an object indicating that no figure is drawn on the touchscreen. 5. The method of claim 1, wherein the receiving of the input related to the configuration information of the figure further comprises, when the configuration information of the figure is length information, receiving the input related to the configuration information of the figure on a side of the figure or in an area within a predetermined distance from the side of the figure. 6. The method of claim 5, wherein the length information comprises at least one of a value of a length of the side of the figure and a preset object indicating a relative length to other sides. 7. 
The method of claim 1, wherein the receiving of the input related to the configuration information of the figure further comprises, when the figure is a circular figure, receiving an input related to an auxiliary line indicating a diameter or a radius of the circular figure and then receiving length information indicating the diameter or the radius of the circular figure. 8. The method of claim 1, wherein the receiving of the input related to the configuration information of the figure further comprises, when the figure is a solid figure, receiving an input related to an auxiliary line indicating a height of the solid figure and then receiving length information indicating the height of the solid figure. 9. The method of claim 1, wherein the receiving of the input related to the configuration information of the figure further comprises, when the configuration information of the figure is angle information, receiving the input related to the configuration information of the figure in an area within a predetermined distance from a contact point at which two sides constituting the angle information meet each other. 10. The method of claim 9, wherein the angle information comprises at least one of a value of an angle and a preset angle marking object. 11. A method of editing a figure content on a touchscreen apparatus, the method comprising: displaying a figure on a touchscreen; receiving an input related to configuration information of the figure from a user; reconstructing the figure displayed on the touchscreen based on the received configuration information of the figure to obtain a reconstructed figure; and displaying the reconstructed figure on the touchscreen, wherein the configuration information of the figure comprises at least one of information indicating a length of the figure and information indicating an angle of the figure. 12. 
The method of claim 11, wherein the received input related to the configuration information of the figure comprises at least one of a swipe input of a side of a figure to be edited and a touch input that selects a figure that is pre-stored in the touchscreen apparatus. 13. The method of claim 11, further comprising receiving a touch input that eliminates a part or the whole of the configuration information of the figure displayed on the touchscreen. 14. The method of claim 11, wherein when it is determined that the pre-stored figure is not reconstructable based on the received configuration information of the figure, the method comprises displaying an object indicating that no figure is drawn on the touchscreen. 15. The method of claim 11, wherein the receiving of the input related to the configuration information of the figure comprises, when the configuration information of the figure is length information, receiving the input related to the configuration information of the figure on a side of the figure or in an area within a predetermined distance from the side of the figure. 16. The method of claim 15, wherein the length information comprises at least one of a value of a length of the side of the figure and a preset object indicating a relative length to other sides. 17. The method of claim 11, wherein the receiving of the input related to the configuration information of the figure comprises, when the figure is a circular figure, receiving an input related to an auxiliary line indicating a diameter or a radius of the circular figure and then receiving length information indicating the diameter or the radius of the circular figure. 18. The method of claim 11, wherein the receiving of the input related to the configuration information of the figure comprises, when the figure is a solid figure, receiving an input related to an auxiliary line indicating a height of the solid figure and then receiving length information indicating the height of the solid figure. 19. 
The method of claim 11, wherein the receiving of the input related to the configuration information of the figure comprises, when the configuration information of the figure is angle information, receiving the input related to the configuration information of the figure in an area within a predetermined distance from a contact point at which two sides constituting the angle information meet each other. 20. The method of claim 19, wherein the angle information comprises at least one of a value of an angle and a preset angle marking object. 21. A method of displaying information related to a figure content on a touchscreen, the method comprising: displaying a figure on a touchscreen; receiving an input that requests for configuration information of the figure; calculating the requested configuration information of the figure by using a preset formula; and displaying the calculated configuration information of the figure on the touchscreen, wherein the configuration information of the figure comprises at least one of information indicating a length of the figure and information indicating an angle of the figure. 22. The method of claim 21, wherein the requested configuration information of the figure comprises at least one of length information, angle information, area information, and volume information of the figure. 23. The method of claim 21, wherein the receiving of the input that requests for the configuration information of the figure comprises receiving an input related to a request object or preset request text. 24. The method of claim 21, wherein the receiving of the input that requests for the configuration information of the figure comprises, when the requested configuration information of the figure is length information, receiving the input that requests for the configuration information of the figure on a side of the figure or in an area within a predetermined distance from the side of the figure. 25. 
The method of claim 21, wherein the receiving of the input that requests for the configuration information of the figure comprises, when the requested figure is a circular figure, receiving an input related to an auxiliary line indicating a diameter or a radius of the circular figure and then receiving an input that requests for length information indicating the diameter or the radius of the circular figure. 26. The method of claim 21, wherein the receiving of the input that requests for the configuration information of the figure comprises, when the requested figure is a solid figure, receiving an input related to an auxiliary line indicating a height of the solid figure and then receiving an input that requests for length information indicating the height of the solid figure. 27. The method of claim 21, wherein the receiving of the input that requests for the configuration information of the figure comprises, when the configuration information of the figure is angle information, receiving the input that requests for the configuration information of the figure in an area within a predetermined distance from a contact point at which two sides constituting the angle information meet each other. 28. A touchscreen apparatus for drawing a figure content, the touchscreen apparatus comprising: an input device configured to receive a touch input of a figure from a user and receive an input related to configuration information of the figure; a processor configured to reconstruct the figure, which is pre-stored, based on at least one of the received touch input and the received configuration information of the figure to obtain a reconstructed figure; and a display configured to display the reconstructed figure on a screen, wherein the configuration information of the figure comprises at least one of information indicating a length of the figure and information indicating an angle of the figure. 29. 
A touchscreen apparatus for editing a figure content, the touchscreen apparatus comprising: an input device configured to receive an input related to configuration information of a figure from a user; a processor configured to reconstruct the figure displayed on a touchscreen based on the received configuration information of the figure to obtain a reconstructed figure; and a display configured to display the reconstructed figure on the touchscreen, wherein the configuration information of the figure comprises at least one of information indicating a length of the figure and information indicating an angle of the figure. 30. A touchscreen apparatus for displaying information related to a figure content, the touchscreen apparatus comprising: an input device configured to receive an input that requests for configuration information of a figure; a processor configured to calculate the requested configuration information of the figure by using a preset formula; and a display configured to display the calculated configuration information of the figure on a touchscreen, wherein the configuration information of the figure comprises at least one of information indicating a length of the figure and information indicating an angle of the figure. 31. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a device with a touchscreen, cause the device to execute the method of claim 1.
A touchscreen apparatus for styling a content is provided. The touchscreen apparatus for drawing a figure content includes an input device receiving a touch input of a figure from a user and receiving an input related to configuration information of the figure, a processor configured to reconstruct the figure, which is pre-stored, based on at least one of the received touch input and the received configuration information of the figure to obtain a reconstructed figure, and a display configured to display the reconstructed figure on a screen, wherein the configuration information of the figure includes at least one of information indicating a length of the figure and information indicating an angle of the figure.1. A method of drawing a figure content on an apparatus, the method comprising: receiving a touch input of a figure from a user; receiving an input related to configuration information of the figure; reconstructing a figure, which is pre-stored, based on at least one of the received touch input and the received configuration information of the figure to form a reconstructed figure; and displaying the reconstructed figure on a touchscreen, wherein the configuration information of the figure comprises at least one of information indicating a length of the figure and information indicating an angle of the figure. 2. The method of claim 1, wherein the received touch input of the figure comprises at least one of a swipe input that draws a side of a figure to be drawn and a touch input that selects a figure that is pre-stored in the apparatus. 3. The method of claim 1, further comprising displaying information of the touch input on the touchscreen. 4. The method of claim 1, further comprising, when it is determined that the figure that is pre-stored is not reconstructable based on at least one of the received touch input and the received configuration information of the figure, displaying an object indicating that no figure is drawn on the touchscreen. 5. 
The method of claim 1, wherein the receiving of the input related to the configuration information of the figure further comprises, when the configuration information of the figure is length information, receiving the input related to the configuration information of the figure on a side of the figure or in an area within a predetermined distance from the side of the figure. 6. The method of claim 5, wherein the length information comprises at least one of a value of a length of the side of the figure and a preset object indicating a relative length to other sides. 7. The method of claim 1, wherein the receiving of the input related to the configuration information of the figure further comprises, when the figure is a circular figure, receiving an input related to an auxiliary line indicating a diameter or a radius of the circular figure and then receiving length information indicating the diameter or the radius of the circular figure. 8. The method of claim 1, wherein the receiving of the input related to the configuration information of the figure further comprises, when the figure is a solid figure, receiving an input related to an auxiliary line indicating a height of the solid figure and then receiving length information indicating the height of the solid figure. 9. The method of claim 1, wherein the receiving of the input related to the configuration information of the figure further comprises, when the configuration information of the figure is angle information, receiving the input related to the configuration information of the figure in an area within a predetermined distance from a contact point at which two sides constituting the angle information meet each other. 10. The method of claim 9, wherein the angle information comprises at least one of a value of an angle and a preset angle marking object. 11. 
A method of editing a figure content on a touchscreen apparatus, the method comprising: displaying a figure on a touchscreen; receiving an input related to configuration information of the figure from a user; reconstructing the figure displayed on the touchscreen based on the received configuration information of the figure to obtain a reconstructed figure; and displaying the reconstructed figure on the touchscreen, wherein the configuration information of the figure comprises at least one of information indicating a length of the figure and information indicating an angle of the figure. 12. The method of claim 11, wherein the received input related to the configuration information of the figure comprises at least one of a swipe input of a side of a figure to be edited and a touch input that selects a figure that is pre-stored in the touchscreen apparatus. 13. The method of claim 11, further comprising receiving a touch input that eliminates a part or the whole of the configuration information of the figure displayed on the touchscreen. 14. The method of claim 11, wherein when it is determined that the pre-stored figure is not reconstructable based on the received configuration information of the figure, the method comprises displaying an object indicating that no figure is drawn on the touchscreen. 15. The method of claim 11, wherein the receiving of the input related to the configuration information of the figure comprises, when the configuration information of the figure is length information, receiving the input related to the configuration information of the figure on a side of the figure or in an area within a predetermined distance from the side of the figure. 16. The method of claim 15, wherein the length information comprises at least one of a value of a length of the side of the figure and a preset object indicating a relative length to other sides. 17. 
The method of claim 11, wherein the receiving of the input related to the configuration information of the figure comprises, when the figure is a circular figure, receiving an input related to an auxiliary line indicating a diameter or a radius of the circular figure and then receiving length information indicating the diameter or the radius of the circular figure. 18. The method of claim 11, wherein the receiving of the input related to the configuration information of the figure comprises, when the figure is a solid figure, receiving an input related to an auxiliary line indicating a height of the solid figure and then receiving length information indicating the height of the solid figure. 19. The method of claim 11, wherein the receiving of the input related to the configuration information of the figure comprises, when the configuration information of the figure is angle information, receiving the input related to the configuration information of the figure in an area within a predetermined distance from a contact point at which two sides constituting the angle information meet each other. 20. The method of claim 19, wherein the angle information comprises at least one of a value of an angle and a preset angle marking object. 21. A method of displaying information related to a figure content on a touchscreen, the method comprising: displaying a figure on a touchscreen; receiving an input that requests for configuration information of the figure; calculating the requested configuration information of the figure by using a preset formula; and displaying the calculated configuration information of the figure on the touchscreen, wherein the configuration information of the figure comprises at least one of information indicating a length of the figure and information indicating an angle of the figure. 22. 
The method of claim 21, wherein the requested configuration information of the figure comprises at least one of length information, angle information, area information, and volume information of the figure. 23. The method of claim 21, wherein the receiving of the input that requests for the configuration information of the figure comprises receiving an input related to a request object or preset request text. 24. The method of claim 21, wherein the receiving of the input that requests for the configuration information of the figure comprises, when the requested configuration information of the figure is length information, receiving the input that requests for the configuration information of the figure on a side of the figure or in an area within a predetermined distance from the side of the figure. 25. The method of claim 21, wherein the receiving of the input that requests for the configuration information of the figure comprises, when the requested figure is a circular figure, receiving an input related to an auxiliary line indicating a diameter or a radius of the circular figure and then receiving an input that requests for length information indicating the diameter or the radius of the circular figure. 26. The method of claim 21, wherein the receiving of the input that requests for the configuration information of the figure comprises, when the requested figure is a solid figure, receiving an input related to an auxiliary line indicating a height of the solid figure and then receiving an input that requests for length information indicating the height of the solid figure. 27. 
The method of claim 21, wherein the receiving of the input that requests for the configuration information of the figure comprises, when the configuration information of the figure is angle information, receiving the input that requests for the configuration information of the figure in an area within a predetermined distance from a contact point at which two sides constituting the angle information meet each other. 28. A touchscreen apparatus for drawing a figure content, the touchscreen apparatus comprising: an input device configured to receive a touch input of a figure from a user and receive an input related to configuration information of the figure; a processor configured to reconstruct the figure, which is pre-stored, based on at least one of the received touch input and the received configuration information of the figure to obtain a reconstructed figure; and a display configured to display the reconstructed figure on a screen, wherein the configuration information of the figure comprises at least one of information indicating a length of the figure and information indicating an angle of the figure. 29. A touchscreen apparatus for editing a figure content, the touchscreen apparatus comprising: an input device configured to receive an input related to configuration information of a figure from a user; a processor configured to reconstruct the figure displayed on a touchscreen based on the received configuration information of the figure to obtain a reconstructed figure; and a display configured to display the reconstructed figure on the touchscreen, wherein the configuration information of the figure comprises at least one of information indicating a length of the figure and information indicating an angle of the figure. 30. 
A touchscreen apparatus for displaying information related to a figure content, the touchscreen apparatus comprising: an input device configured to receive an input that requests for configuration information of a figure; a processor configured to calculate the requested configuration information of the figure by using a preset formula; and a display configured to display the calculated configuration information of the figure on a touchscreen, wherein the configuration information of the figure comprises at least one of information indicating a length of the figure and information indicating an angle of the figure. 31. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a device with a touchscreen, cause the device to execute the method of claim 1.
2,600
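Claims 1, 4, and 21 of the touchscreen record above cover reconstructing a pre-stored figure from length configuration information, signalling when no figure can be drawn, and calculating requested configuration information "by using a preset formula". The sketch below illustrates both ideas for a triangle using the law of cosines as the preset formula; the function name and the choice of triangle are illustrative assumptions, not the patent's specification.

```python
import math

def reconstruct_triangle(side_lengths):
    """Given three side lengths (length configuration information),
    return the triangle's interior angles in degrees, or None when no
    valid triangle exists (modelling the 'no figure is drawn'
    indication of claim 4).
    """
    a, b, c = side_lengths
    # Triangle inequality: the figure is not reconstructable otherwise.
    if a + b <= c or b + c <= a or a + c <= b:
        return None

    def angle(opposite, s1, s2):
        # Law of cosines, solved for the angle opposite `opposite`.
        cos_val = (s1 ** 2 + s2 ** 2 - opposite ** 2) / (2 * s1 * s2)
        return math.degrees(math.acos(cos_val))

    return [angle(a, b, c), angle(b, a, c), angle(c, a, b)]
```

A 3-4-5 input yields a right angle opposite the longest side, while an impossible input such as (1, 2, 5) returns None instead of a figure.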
10,249
10,249
15,690,646
2,644
In one implementation, a communications satellite includes a main antenna system and a communications controller. The main antenna system is configured to send communications to and receive communications from one or more terrestrial terminal devices. The communications controller has a memory storing a plurality of terminal attribute sets, each of which specifies attributes for communicating with a corresponding class of terrestrial terminal devices. The communications controller is configured to receive a terminal class identifier from an active terrestrial terminal device, identify, from among the stored terminal attribute sets, a particular terminal attribute set as corresponding to the terminal class identifier received from the active terrestrial terminal device, and control the communications satellite to communicate with the active terrestrial terminal device according to the attributes for communicating specified in the particular terminal attribute set identified as corresponding to the terminal class identifier received from the active terrestrial terminal device.
1. A communications satellite comprising: a main antenna system configured to send communications to and receive communications from one or more terrestrial terminal devices in a satellite communications network; and a communications controller having a memory storing a plurality of terminal attribute sets, each terminal attribute set specifying physical properties of signals used in communicating with a corresponding class of terrestrial terminal devices, the communications controller configured to: receive, via the main antenna system, a terminal class identifier from an active terrestrial terminal device; identify, from among the stored terminal attribute sets, a particular terminal attribute set as corresponding to the terminal class identifier received from the active terrestrial terminal device; and control the communications satellite to communicate with the active terrestrial terminal device using signals conforming to the physical properties specified in the particular terminal attribute set identified as corresponding to the terminal class identifier received from the active terrestrial terminal device. 2. The communications satellite of claim 1, further comprising a crosslink antenna system configured to facilitate communications between the communications satellite and one or more other communications satellites within the satellite communications network. 3. The communications satellite of claim 1, wherein the communications controller is further configured to receive an update to the plurality of terminal attribute sets, the update comprising a new terminal attribute set for a new class of terrestrial terminal devices. 4. 
The communications satellite of claim 3, wherein the terminal class identifier received from the active terrestrial terminal device corresponds to the new terminal attribute set, and wherein the active terrestrial terminal device is designed to communicate using signals conforming to the physical properties specified in the new terminal attribute set. 5. The communications satellite of claim 1, wherein each of the plurality of terminal attribute sets specifies attributes for communicating with a corresponding class of terrestrial terminal devices, comprising: antenna attributes for the corresponding class of terrestrial terminal devices; and carriers supported by the corresponding class of terrestrial terminal devices. 6. The communications satellite of claim 4, wherein the new terminal attribute set corresponds to a new class of terrestrial terminal devices introduced to the satellite communications network after the launch of the communications satellite. 7. The communications satellite of claim 2, wherein the communications controller is further configured to process communications received from the active terrestrial terminal device as radio frequency signals, the radio frequency signals complying with one or more of the physical properties of signals used in communicating specified in the particular terminal attribute set identified as corresponding to the terminal class identifier received from the active terrestrial terminal device. 8. The communications satellite of claim 7, wherein the communications controller is further configured to control the communications satellite to retransmit the communications received from the active terrestrial terminal device to a terrestrial earth terminal via one or more other communications satellites in the satellite communications network using the crosslink antenna system. 9. The communications satellite of claim 7, wherein the communications received from the active terminal device are voice communications. 10. 
The communications satellite of claim 7, wherein the communications received from the active terminal device are data communications. 11. The communications satellite of claim 5, wherein the antenna attributes for the corresponding class of terrestrial terminal devices comprise: an antenna gain-to-noise-temperature attribute for the corresponding class of terrestrial terminal devices; and an equivalent isotropically radiated power attribute for the corresponding class of terrestrial terminal devices. 12. The communications satellite of claim 5, wherein the carriers supported by the corresponding class of terrestrial terminal devices comprise supported carrier modulation schemes for the corresponding class of terrestrial terminal devices. 13. A method of operating a communications satellite, the method comprising: storing, in a memory associated with a communications controller of the communications satellite, a plurality of terminal attribute sets, each terminal attribute set specifying physical properties of signals used in communicating with a corresponding class of terrestrial terminal devices; receiving, via a main antenna system of the communications satellite configured to send communications to and receive communications from one or more terrestrial terminal devices in a satellite communications network, a terminal class identifier from an active terrestrial terminal device; identifying, from among the stored terminal attribute sets, a particular terminal attribute set as corresponding to the terminal class identifier received from the active terrestrial terminal device; and controlling the communications satellite to communicate with the active terrestrial terminal device using signals conforming to the physical properties specified in the particular terminal attribute set identified as corresponding to the terminal class identifier received from the active terrestrial terminal device. 14. 
The method of claim 13, further comprising: receiving an update to the plurality of terminal attribute sets at the communications controller, the update comprising a new terminal attribute set for a new class of terrestrial terminal devices; and storing the new attribute set in the memory. 15. The method of claim 14, wherein the terminal class identifier received from the active terrestrial terminal device corresponds to the new terminal attribute set, and wherein the active terrestrial terminal device is designed to communicate using signals conforming to the physical properties specified in the new terminal attribute set. 16. The method of claim 13, wherein each of the plurality of terminal attribute sets specifies attributes for communicating with a corresponding class of terrestrial terminal devices, comprising: antenna attributes for the corresponding class of terrestrial terminal devices; and carriers supported by the corresponding class of terrestrial terminal devices. 17. The method of claim 16, wherein the carriers supported by the corresponding class of terrestrial terminal devices comprise supported carrier modulation schemes for the corresponding class of terrestrial terminal devices. 18. The method of claim 13, further comprising: receiving communications from the active terrestrial terminal device as radio frequency signals that comply with one or more of the physical properties of signals used in communicating specified in the particular terminal attribute set identified as corresponding to the terminal class identifier received from the active terrestrial terminal device. 19. The method of claim 18, further comprising: retransmitting the communications received from the active terrestrial terminal device to a terrestrial earth terminal via one or more other communications satellites in the satellite communications network. 20. 
A communications satellite comprising: a main antenna system configured to send communications to and receive communications from one or more terrestrial terminal devices in a satellite communications network; and a communications controller having a memory storing a plurality of terminal attribute sets, each terminal attribute set specifying physical properties of signals and satellite antenna equipment used in communicating with a corresponding class of terrestrial terminal devices including an antenna gain-to-noise temperature attribute, an equivalent isotropically radiated power attribute, and one or more supported digital modulation schemes for carrier signals for the corresponding class of terrestrial terminal devices, the communications controller configured to: receive, via the main antenna system, a terminal class identifier from an active terrestrial terminal device; identify, from among the stored terminal attribute sets, a particular terminal attribute set as corresponding to the terminal class identifier received from the active terrestrial terminal device; and control the communications satellite to communicate with the active terrestrial terminal device according to the antenna gain-to-noise temperature attribute, the equivalent isotropically radiated power attribute, and a particular one of the supported digital modulation schemes for carrier signals specified in the particular terminal attribute set identified as corresponding to the terminal class identifier received from the active terrestrial terminal device.
In one implementation, a communications satellite includes a main antenna system and a communications controller. The main antenna system is configured to send communications to and receive communications from one or more terrestrial terminal devices. The communications controller has a memory storing a plurality of terminal attribute sets, each of which specifies attributes for communicating with a corresponding class of terrestrial terminal devices. The communications controller is configured to receive a terminal class identifier from an active terrestrial terminal device, identify, from among the stored terminal attribute sets, a particular terminal attribute set as corresponding to the terminal class identifier received from the active terrestrial terminal device, and control the communications satellite to communicate with the active terrestrial terminal device according to the attributes for communicating specified in the particular terminal attribute set identified as corresponding to the terminal class identifier received from the active terrestrial terminal device.1. 
A communications satellite comprising: a main antenna system configured to send communications to and receive communications from one or more terrestrial terminal devices in a satellite communications network; and a communications controller having a memory storing a plurality of terminal attribute sets, each terminal attribute set specifying physical properties of signals used in communicating with a corresponding class of terrestrial terminal devices, the communications controller configured to: receive, via the main antenna system, a terminal class identifier from an active terrestrial terminal device; identify, from among the stored terminal attribute sets, a particular terminal attribute set as corresponding to the terminal class identifier received from the active terrestrial terminal device; and control the communications satellite to communicate with the active terrestrial terminal device using signals conforming to the physical properties specified in the particular terminal attribute set identified as corresponding to the terminal class identifier received from the active terrestrial terminal device. 2. The communications satellite of claim 1, further comprising a crosslink antenna system configured to facilitate communications between the communications satellite and one or more other communications satellites within the satellite communications network. 3. The communications satellite of claim 1, wherein the communications controller is further configured to receive an update to the plurality of terminal attribute sets, the update comprising a new terminal attribute set for a new class of terrestrial terminal devices. 4. 
The communications satellite of claim 3, wherein the terminal class identifier received from the active terrestrial terminal device corresponds to the new terminal attribute set, and wherein the active terrestrial terminal device is designed to communicate using signals conforming to the physical properties specified in the new terminal attribute set. 5. The communications satellite of claim 1, wherein each of the plurality of terminal attribute sets specifies attributes for communicating with a corresponding class of terrestrial terminal devices, comprising: antenna attributes for the corresponding class of terrestrial terminal devices; and carriers supported by the corresponding class of terrestrial terminal devices. 6. The communications satellite of claim 4, wherein the new terminal attribute set corresponds to a new class of terrestrial terminal devices introduced to the satellite communications network after the launch of the communications satellite. 7. The communications satellite of claim 2, wherein the communications controller is further configured to process communications received from the active terrestrial terminal device as radio frequency signals, the radio frequency signals complying with one or more of the physical properties of signals used in communicating specified in the particular terminal attribute set identified as corresponding to the terminal class identifier received from the active terrestrial terminal device. 8. The communications satellite of claim 7, wherein the communications controller is further configured to control the communications satellite to retransmit the communications received from the active terrestrial terminal device to a terrestrial earth terminal via one or more other communications satellites in the satellite communications network using the crosslink antenna system. 9. The communications satellite of claim 7, wherein the communications received from the active terminal device are voice communications. 10. 
The communications satellite of claim 7, wherein the communications received from the active terminal device are data communications. 11. The communications satellite of claim 5, wherein the antenna attributes for the corresponding class of terrestrial terminal devices comprise: an antenna gain-to-noise-temperature attribute for the corresponding class of terrestrial terminal devices; and an equivalent isotropically radiated power attribute for the corresponding class of terrestrial terminal devices. 12. The communications satellite of claim 5, wherein the carriers supported by the corresponding class of terrestrial terminal devices comprise supported carrier modulation schemes for the corresponding class of terrestrial terminal devices. 13. A method of operating a communications satellite, the method comprising: storing, in a memory associated with a communications controller of the communications satellite, a plurality of terminal attribute sets, each terminal attribute set specifying physical properties of signals used in communicating with a corresponding class of terrestrial terminal devices; receiving, via a main antenna system of the communications satellite configured to send communications to and receive communications from one or more terrestrial terminal devices in a satellite communications network, a terminal class identifier from an active terrestrial terminal device; identifying, from among the stored terminal attribute sets, a particular terminal attribute set as corresponding to the terminal class identifier received from the active terrestrial terminal device; and controlling the communications satellite to communicate with the active terrestrial terminal device using signals conforming to the physical properties specified in the particular terminal attribute set identified as corresponding to the terminal class identifier received from the active terrestrial terminal device. 14. 
The method of claim 13, further comprising: receiving an update to the plurality of terminal attribute sets at the communications controller, the update comprising a new terminal attribute set for a new class of terrestrial terminal devices; and storing the new attribute set in the memory. 15. The method of claim 14, wherein the terminal class identifier received from the active terrestrial terminal device corresponds to the new terminal attribute set, and wherein the active terrestrial terminal device is designed to communicate using signals conforming to the physical properties specified in the new terminal attribute set. 16. The method of claim 13, wherein each of the plurality of terminal attribute sets specifies attributes for communicating with a corresponding class of terrestrial terminal devices, comprising: antenna attributes for the corresponding class of terrestrial terminal devices; and carriers supported by the corresponding class of terrestrial terminal devices. 17. The method of claim 16, wherein the carriers supported by the corresponding class of terrestrial terminal devices comprise supported carrier modulation schemes for the corresponding class of terrestrial terminal devices. 18. The method of claim 13, further comprising: receiving communications from the active terrestrial terminal device as radio frequency signals that comply with one or more of the physical properties of signals used in communicating specified in the particular terminal attribute set identified as corresponding to the terminal class identifier received from the active terrestrial terminal device. 19. The method of claim 18, further comprising: retransmitting the communications received from the active terrestrial terminal device to a terrestrial earth terminal via one or more other communications satellites in the satellite communications network. 20. 
A communications satellite comprising: a main antenna system configured to send communications to and receive communications from one or more terrestrial terminal devices in a satellite communications network; and a communications controller having a memory storing a plurality of terminal attribute sets, each terminal attribute set specifying physical properties of signals and satellite antenna equipment used in communicating with a corresponding class of terrestrial terminal devices including an antenna gain-to-noise temperature attribute, an equivalent isotropically radiated power attribute, and one or more supported digital modulation schemes for carrier signals for the corresponding class of terrestrial terminal devices, the communications controller configured to: receive, via the main antenna system, a terminal class identifier from an active terrestrial terminal device; identify, from among the stored terminal attribute sets, a particular terminal attribute set as corresponding to the terminal class identifier received from the active terrestrial terminal device; and control the communications satellite to communicate with the active terrestrial terminal device according to the antenna gain-to-noise temperature attribute, the equivalent isotropically radiated power attribute, and a particular one of the supported digital modulation schemes for carrier signals specified in the particular terminal attribute set identified as corresponding to the terminal class identifier received from the active terrestrial terminal device.
2,600
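The satellite record above claims a controller that stores terminal attribute sets keyed by a terminal class identifier, accepts updates with new sets (claim 3), and selects the set matching the identifier received from an active terminal. A minimal sketch of that lookup behavior, with hypothetical field names and values (the claims only require a G/T attribute, an EIRP attribute, and supported carrier modulation schemes):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TerminalAttributeSet:
    # Physical-layer properties for one class of terrestrial terminals.
    # Field names and units are illustrative assumptions, not from the patent.
    g_over_t_db_per_k: float      # antenna gain-to-noise-temperature attribute
    eirp_dbw: float               # equivalent isotropically radiated power attribute
    modulation_schemes: tuple     # supported carrier modulation schemes

class CommunicationsController:
    """Sketch of the claimed controller: store, update, and identify attribute sets."""

    def __init__(self, attribute_sets):
        # memory storing a plurality of terminal attribute sets,
        # keyed by terminal class identifier
        self._attribute_sets = dict(attribute_sets)

    def update(self, class_id, attribute_set):
        # claim 3: receive an update comprising a new terminal attribute set
        # for a new class of terminals (e.g. one introduced after launch, claim 6)
        self._attribute_sets[class_id] = attribute_set

    def select(self, class_id):
        # identify the particular attribute set corresponding to the
        # terminal class identifier received from the active terminal
        return self._attribute_sets[class_id]

# hypothetical usage with made-up terminal classes
ctrl = CommunicationsController({
    "handheld-v1": TerminalAttributeSet(-23.0, 5.0, ("BPSK",)),
})
ctrl.update("handheld-v2", TerminalAttributeSet(-20.0, 8.0, ("BPSK", "QPSK")))
chosen = ctrl.select("handheld-v2")
assert "QPSK" in chosen.modulation_schemes
```

The dictionary keyed on class identifier is one obvious realization; the claims do not constrain the data structure, only the store/identify/control behavior.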
10,250
10,250
14,361,743
2,627
The present invention relates to a measurement method in which, by predetermined illumination by means of a display device, in particular a holographic or autostereoscopic display device, with an intensity distribution of the illumination light in a plane of a light source image, a first location of an object, in particular an observer of the display device, is marked, and wherein the relative position of the first location in relation to a second location of the object is determined in a coordinate system of a camera.
1. A measurement method, wherein by predetermined illumination by means of a display device, in particular a holographic or autostereoscopic display device, with an intensity distribution of the illumination light in a plane of a light source image, a first location of an object, in particular an observer of the display device, is marked, and wherein the relative position of the first location in relation to a second location of the object is determined in a coordinate system of a camera. 2. The measurement method as claimed in claim 1, wherein the intensity distribution of the illumination light in the plane of the light source image comprises a light source image of a diffraction order. 3. The measurement method as claimed in claim 1, wherein the second location is an eye pupil of the observer, and the relative position of the first location in relation to the eye pupil of the observer is determined in the coordinate system of the camera. 4. The measurement method as claimed in claim 1, wherein the first location is brought to coincide with a predeterminable region of the face of the observer, in particular with the eye pupil of the observer, by variation of the predetermined illumination. 5. The measurement method as claimed in claim 1, wherein a viewing window of an image to be displayed to the observer is used as the intensity distribution of the illumination light in a plane of a light source image or as a light source image. 6. The measurement method as claimed in claim 1, wherein the second location of the object is defined by predetermined illumination by means of the display device with a second intensity distribution of the illumination light in a plane of a second light source image. 7. 
The measurement method as claimed in claim 6, wherein a predeterminable pattern is formed on the object, in particular the face of the observer, with the first and second intensity distributions of the illumination light, wherein an image of the pattern is recorded with the camera, and wherein the recorded image of the pattern is examined for differences from the predeterminable pattern. 8. The measurement method as claimed in claim 6, wherein a first diffraction order is used as the first light source image and a different diffraction order is used as the second light source image. 9. The measurement method as claimed in claim 1, wherein a calibrated object is used. 10. The measurement method as claimed in claim 6, wherein the coordinate system of the camera is calibrated in relation to a coordinate system of the display device from the relative position of the first location in relation to the second location in the coordinate system of the camera. 11. The measurement method as claimed in claim 1, wherein the camera is arranged at a predetermined distance and/or in a predetermined orientation with respect to the display device, and wherein the position of the second location in a coordinate system of the display device is determined from the relative position of the first location in relation to the second location in the coordinate system of the camera. 12. 
The measurement method as claimed in claim 1, wherein the first light source image and/or the second light source image is generated by an optical system of the display device and by predetermined illumination of a controllable spatial light modulator with light of a first visible wavelength and/or a second visible wavelength and/or a third visible wavelength and/or an infrared wavelength, and wherein the camera and/or a further camera is provided with a filter which is transmissive essentially only for light of the first visible wavelength and/or the second visible wavelength and/or the third visible wavelength and/or infrared wavelengths. 13. The measurement method as claimed in claim 1, wherein the relative position of the first location in relation to the second location is determined in a second coordinate system of a second camera. 14. The measurement method as claimed in claim 4, wherein by predetermined illumination by means of a display device, in particular a holographic or autostereoscopic display device, with an intensity distribution of the illumination light in a plane of a light source image, a first location of an observer of the display device is marked, wherein a viewing window of an image to be displayed to the observer is used as the intensity distribution of the illumination light in a plane of a light source image or as a light source image, wherein the relative position of the first location in relation to the eye pupil of the observer is determined in the coordinate system of the camera, wherein the first location is brought to coincide with a predeterminable region of the face of the observer, in particular with the eye pupil of the observer, by variation of the predetermined illumination. 15. 
An apparatus for carrying out a measurement method as claimed in claim 1, wherein the apparatus comprises a display device, in particular a holographic display device or an autostereoscopic display device, a camera and an evaluation unit for determining the position of the first location in a coordinate system of the camera. 16. The apparatus as claimed in claim 15, wherein the camera comprises a CCD sensor or a CMOS sensor and/or wherein the camera is a color camera. 17. The apparatus as claimed in claim 15, which comprises a light source, wherein an intensity distribution of the illumination light in a plane of a light source image can be generated with the light source and the optical system. 18. The apparatus as claimed in claim 15, wherein the apparatus comprises a filter, wherein the filter is arranged in front of the first camera, and wherein the filter is transmissive essentially only for light of a first visible wavelength and/or a second visible wavelength and/or a third visible wavelength and/or infrared wavelength.
The present invention relates to a measurement method in which, by predetermined illumination by means of a display device, in particular a holographic or autostereoscopic display device, with an intensity distribution of the illumination light in a plane of a light source image, a first location of an object, in particular an observer of the display device, is marked, and wherein the relative position of the first location in relation to a second location of the object is determined in a coordinate system of a camera.1. A measurement method, wherein by predetermined illumination by means of a display device, in particular a holographic or autostereoscopic display device, with an intensity distribution of the illumination light in a plane of a light source image, a first location of an object, in particular an observer of the display device, is marked, and wherein the relative position of the first location in relation to a second location of the object is determined in a coordinate system of a camera. 2. The measurement method as claimed in claim 1, wherein the intensity distribution of the illumination light in the plane of the light source image comprises a light source image of a diffraction order. 3. The measurement method as claimed in claim 1, wherein the second location is an eye pupil of the observer, and the relative position of the first location in relation to the eye pupil of the observer is determined in the coordinate system of the camera. 4. The measurement method as claimed in claim 1, wherein the first location is brought to coincide with a predeterminable region of the face of the observer, in particular with the eye pupil of the observer, by variation of the predetermined illumination. 5. The measurement method as claimed in claim 1, wherein a viewing window of an image to be displayed to the observer is used as the intensity distribution of the illumination light in a plane of a light source image or as a light source image. 6. 
The measurement method as claimed in claim 1, wherein the second location of the object is defined by predetermined illumination by means of the display device with a second intensity distribution of the illumination light in a plane of a second light source image. 7. The measurement method as claimed in claim 6, wherein a predeterminable pattern is formed on the object, in particular the face of the observer, with the first and second intensity distributions of the illumination light, wherein an image of the pattern is recorded with the camera, and wherein the recorded image of the pattern is examined for differences from the predeterminable pattern. 8. The measurement method as claimed in claim 6, wherein a first diffraction order is used as the first light source image and a different diffraction order is used as the second light source image. 9. The measurement method as claimed in claim 1, wherein a calibrated object is used. 10. The measurement method as claimed in claim 6, wherein the coordinate system of the camera is calibrated in relation to a coordinate system of the display device from the relative position of the first location in relation to the second location in the coordinate system of the camera. 11. The measurement method as claimed in claim 1, wherein the camera is arranged at a predetermined distance and/or in a predetermined orientation with respect to the display device, and wherein the position of the second location in a coordinate system of the display device is determined from the relative position of the first location in relation to the second location in the coordinate system of the camera. 12. 
The measurement method as claimed in claim 1, wherein the first light source image and/or the second light source image is generated by an optical system of the display device and by predetermined illumination of a controllable spatial light modulator with light of a first visible wavelength and/or a second visible wavelength and/or a third visible wavelength and/or an infrared wavelength, and wherein the camera and/or a further camera is provided with a filter which is transmissive essentially only for light of the first visible wavelength and/or the second visible wavelength and/or the third visible wavelength and/or infrared wavelengths. 13. The measurement method as claimed in claim 1, wherein the relative position of the first location in relation to the second location is determined in a second coordinate system of a second camera. 14. The measurement method as claimed in claim 4, wherein by predetermined illumination by means of a display device, in particular a holographic or autostereoscopic display device, with an intensity distribution of the illumination light in a plane of a light source image, a first location of an observer of the display device is marked, wherein a viewing window of an image to be displayed to the observer is used as the intensity distribution of the illumination light in a plane of a light source image or as a light source image, wherein the relative position of the first location in relation to the eye pupil of the observer is determined in the coordinate system of the camera, wherein the first location is brought to coincide with a predeterminable region of the face of the observer, in particular with the eye pupil of the observer, by variation of the predetermined illumination. 15. 
An apparatus for carrying out a measurement method as claimed in claim 1, wherein the apparatus comprises a display device, in particular a holographic display device or an autostereoscopic display device, a camera and an evaluation unit for determining the position of the first location in a coordinate system of the camera. 16. The apparatus as claimed in claim 15, wherein the camera comprises a CCD sensor or a CMOS sensor and/or wherein the camera is a color camera. 17. The apparatus as claimed in claim 15, which comprises a light source, wherein an intensity distribution of the illumination light in a plane of a light source image can be generated with the light source and the optical system. 18. The apparatus as claimed in claim 15, wherein the apparatus comprises a filter, wherein the filter is arranged in front of the first camera, and wherein the filter is transmissive essentially only for light of a first visible wavelength and/or a second visible wavelength and/or a third visible wavelength and/or infrared wavelength.
2,600
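The measurement-method record above determines the relative position of an illumination-marked first location with respect to a second location (e.g. the eye pupil) in the camera's coordinate system, and claim 4 brings the marked location to coincide with the pupil by varying the illumination. A minimal sketch in camera pixel coordinates; the proportional steering rule and gain are assumptions, not part of the patent:

```python
import math

def relative_position(marked_px, pupil_px):
    # Offset (dx, dy) of the pupil (second location) relative to the
    # illumination-marked first location, in the camera's coordinate system.
    return (pupil_px[0] - marked_px[0], pupil_px[1] - marked_px[1])

def steer_until_coincident(marked_px, pupil_px, gain=0.5, tol=0.5, max_iter=100):
    # Claim 4 sketch: iteratively vary the predetermined illumination so the
    # marked first location coincides with the pupil. A proportional update
    # with a hypothetical gain stands in for re-aiming the light source image.
    mx, my = float(marked_px[0]), float(marked_px[1])
    tx, ty = float(pupil_px[0]), float(pupil_px[1])
    for _ in range(max_iter):
        dx, dy = tx - mx, ty - my
        if math.hypot(dx, dy) <= tol:
            break  # marked location coincides with the pupil within tolerance
        mx += gain * dx
        my += gain * dy
    return (mx, my)

# hypothetical usage: pupil detected at (110, 115), marked spot at (100, 120)
offset = relative_position((100, 120), (110, 115))
final = steer_until_coincident((100, 120), (110, 115))
```

With a gain below 1 the residual offset shrinks geometrically each iteration, so coincidence within the pixel tolerance is reached in a handful of steps; any measured camera-to-display calibration (claims 10-11) would then map this pixel offset into the display device's coordinate system.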
10,251
10,251
14,867,892
2,626
An electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface displays a first user interface of a first software application, detects an input on the touch-sensitive surface while displaying the first user interface, and, in response to detecting the input while displaying the first user interface, performs a first operation in accordance with a determination that the input satisfies intensity input criteria including that the input satisfies a first intensity threshold and the input remains on the touch-sensitive surface for a first predefined time period, and performs a second operation in accordance with a determination that the input satisfies tap criteria including that the input ceases to remain on the touch-sensitive surface during the first predefined time period.
1. A method, comprising: at an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface: displaying a first user interface; while displaying the first user interface, detecting a first portion of an input on the touch-sensitive surface that includes detecting a contact on the touch-sensitive surface; after at least the first portion of the input has been detected, monitoring progression of the input with a plurality of different gesture recognizers including a first gesture recognizer that is configured to recognize a tap gesture and a second gesture recognizer that is distinct from the first gesture recognizer and configured to recognize an intensity-based gesture; while the first gesture recognizer is in a state that indicates that the first gesture recognizer is capable of recognizing the input as a tap gesture, and the second gesture recognizer is in a state that indicates that the second gesture recognizer is capable of recognizing the input as an intensity-based gesture, detecting a second portion of the input that is subsequent to the first portion of the input and includes a change in intensity of the contact; and, in response to detecting the second portion of the input while displaying the first user interface: in accordance with a determination that the input satisfies intensity input criteria including that the input satisfies a first intensity threshold and the input remains on the touch-sensitive surface for a first predefined time period: transitioning the second gesture recognizer into a state that indicates that the input has been recognized as an intensity-based gesture and performing a first operation in accordance with the transition of the second gesture recognizer into the state that indicates that the input has been recognized as an intensity-based gesture; and transitioning the first gesture recognizer into a state that indicates that the input will 
not be recognized by the first gesture recognizer; and, in accordance with a determination that the input satisfies tap criteria including that the input ceases to remain on the touch-sensitive surface during the first predefined time period, transitioning the first gesture recognizer into a state that indicates that the input has been recognized as a tap gesture and performing a second operation that is distinct from the first operation in accordance with the transition of the first gesture recognizer into the state that indicates that the input has been recognized as a tap gesture. 2. The method of claim 1, including: in response to detecting the first portion of the input on the touch-sensitive surface, identifying a plurality of gesture recognizers that correspond to at least the first portion of the input as candidate gesture recognizers; in response to detecting the second portion of the input on the touch-sensitive surface: in accordance with the determination that the input satisfies the intensity input criteria, performing the first operation including processing the input with the first gesture recognizer; and, in accordance with the determination that the input satisfies the tap criteria, performing the second operation including processing the input with the second gesture recognizer. 3. The method of claim 1, wherein the first gesture recognizer is an intensity-based gesture recognizer and the second gesture recognizer is a tap gesture recognizer. 4. The method of claim 1, wherein the input includes a third portion of the input that is subsequent to the second portion of the input, and the method includes processing the third portion of the input with the first gesture recognizer. 5. 
The method of claim 1, including: in accordance with a determination that the input satisfies tap criteria including that the input ceases to remain on the touch-sensitive surface during the first predefined time period, transitioning the second gesture recognizer into a state that indicates that the input will not be recognized by the second gesture recognizer. 6. The method of claim 1, including: in response to determining that the input satisfies a second intensity threshold, processing the input with the first gesture recognizer, including replacing display of the first user interface with a second user interface. 7. The method of claim 2, wherein: the candidate gesture recognizers include a third gesture recognizer; and the method includes, in response to determining that the input satisfies a second intensity threshold, processing the input with the third gesture recognizer. 8. The method of claim 1, including: in response to detecting the first portion of the input, performing a third operation. 9. The method of claim 8, wherein performing the third operation includes visually distinguishing at least a portion of the first user interface from other portions of the first user interface. 10. The method of claim 8, including, subsequent to performing the third operation: in accordance with the determination that the input satisfies the intensity input criteria, performing the first operation; and, in accordance with the determination that the input satisfies the tap criteria, performing the second operation. 11. The method of claim 8, wherein the third operation is initiated during the first predefined time period. 12. The method of claim 1, wherein: performing the first operation includes displaying a preview area. 13. 
The method of claim 1, wherein: performing the second operation includes replacing display of the first user interface with a third user interface of a software application that corresponds to a location of the input on the touch-sensitive surface. 14. The method of claim 1, wherein the first intensity threshold is adjustable. 15. The method of claim 14, including updating the first gesture recognizer to be activated in response to the input satisfying a third intensity threshold that is distinct from the first intensity threshold. 16. The method of claim 1, wherein the second operation is performed in accordance with the determination that the input satisfies the tap criteria, regardless of whether the input satisfies the intensity input criteria. 17. The method of claim 1, including: in response to detecting the input while displaying the first user interface: in accordance with a determination that the input remains on the touch-sensitive surface for the first predefined time period followed by the input subsequently ceasing to be detected on the touch-sensitive surface and the input does not satisfy the intensity input criteria, performing the second operation; and, in accordance with a determination that the input remains on the touch-sensitive surface for the first predefined time period followed by the input subsequently ceasing to be detected on the touch-sensitive surface and the input satisfies the intensity input criteria, forgoing performance of the second operation. 18. 
An electronic device, comprising: a display; a touch-sensitive surface; one or more sensors to detect intensity of contacts with the touch-sensitive surface; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: displaying a first user interface; while displaying the first user interface, detecting a first portion of an input on the touch-sensitive surface that includes detecting a contact on the touch-sensitive surface; after at least the first portion of the input has been detected, monitoring progression of the input with a plurality of different gesture recognizers including a first gesture recognizer that is configured to recognize a tap gesture and a second gesture recognizer that is distinct from the first gesture recognizer and configured to recognize an intensity-based gesture; while the first gesture recognizer is in a state that indicates that the first gesture recognizer is capable of recognizing the input as a tap gesture, and the second gesture recognizer is in a state that indicates that the second gesture recognizer is capable of recognizing the input as an intensity-based gesture, detecting a second portion of the input that is subsequent to the first portion of the input and includes a change in intensity of the contact; and, in response to detecting the second portion of the input while displaying the first user interface: in accordance with a determination that the input satisfies intensity input criteria including that the input satisfies a first intensity threshold and the input remains on the touch-sensitive surface for a first predefined time period: transitioning the second gesture recognizer into a state that indicates that the input has been recognized as an intensity-based gesture and performing a first operation in accordance with the transition of the second 
gesture recognizer into the state that indicates that the input has been recognized as an intensity-based gesture; and transitioning the first gesture recognizer into a state that indicates that the input will not be recognized by the first gesture recognizer; and, in accordance with a determination that the input satisfies tap criteria including that the input ceases to remain on the touch-sensitive surface during the first predefined time period, transitioning the first gesture recognizer into a state that indicates that the input has been recognized as a tap gesture and performing a second operation that is distinct from the first operation in accordance with the transition of the first gesture recognizer into the state that indicates that the input has been recognized as a tap gesture. 19. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface cause the device to: display a first user interface; while displaying the first user interface, detect a first portion of an input on the touch-sensitive surface that includes detecting a contact on the touch-sensitive surface; after at least the first portion of the input has been detected, monitoring progression of the input with a plurality of different gesture recognizers including a first gesture recognizer that is configured to recognize a tap gesture and a second gesture recognizer that is distinct from the first gesture recognizer and configured to recognize an intensity-based gesture; while the first gesture recognizer is in a state that indicates that the first gesture recognizer is capable of recognizing the input as a tap gesture, and the second gesture recognizer is in a state that indicates that the second gesture recognizer is capable of recognizing the input 
as an intensity-based gesture, detecting a second portion of the input that is subsequent to the first portion of the input and includes a change in intensity of the contact; and, in response to detecting the second portion of the input while displaying the first user interface: in accordance with a determination that the input satisfies intensity input criteria including that the input satisfies a first intensity threshold and the input remains on the touch-sensitive surface for a first predefined time period, performing a first operation; and, in accordance with a determination that the input satisfies tap criteria including that the input ceases to remain on the touch-sensitive surface during the first predefined time period, performing a second operation that is distinct from the first operation. 20. The method of claim 1, wherein: the plurality of different gesture recognizers includes a third gesture recognizer that is configured to recognize a second intensity-based gesture; the method includes: while the first gesture recognizer is in a state that indicates that the first gesture recognizer is capable of recognizing the input as a tap gesture, the second gesture recognizer is in a state that indicates that the second gesture recognizer is capable of recognizing the input as an intensity-based gesture, and the third gesture recognizer is in a state that indicates that the third gesture recognizer is capable of recognizing the input as a second intensity-based gesture, detecting a second portion of the input that is subsequent to the first portion of the input and includes a change in intensity of the contact; and, in response to detecting the second portion of the input while displaying the first user interface: in accordance with a determination that the input satisfies a fourth intensity threshold: transitioning the third gesture recognizer into a began state that indicates that the input has satisfied the fourth intensity threshold as a minimum intensity 
threshold for the third gesture recognizer; maintaining the first gesture recognizer in the state that indicates that the first gesture recognizer is capable of recognizing the input as a tap gesture; and maintaining the second gesture recognizer in the state that indicates that the second gesture recognizer is capable of recognizing the input as an intensity-based gesture. 21. The method of claim 20, wherein: the first intensity threshold is greater than the fourth intensity threshold. 22. The method of claim 20, including: in response to detecting the second portion of the input while displaying the first user interface: subsequent to the determination that the input satisfies the fourth intensity threshold, in accordance with a determination that an intensity of the input has changed: transitioning the third gesture recognizer into a changed state that indicates that the intensity of the input has changed, wherein the changed state is distinct from the began state; maintaining the first gesture recognizer in the state that indicates that the first gesture recognizer is capable of recognizing the input as a tap gesture; and maintaining the second gesture recognizer in the state that indicates that the second gesture recognizer is capable of recognizing the input as an intensity-based gesture. 23. 
The method of claim 22, including: in response to detecting the second portion of the input while displaying the first user interface: subsequent to the determination that the intensity of the input has changed, in accordance with a determination that the input has remained on the touch-sensitive surface for at least the first predefined time period: transitioning the first gesture recognizer into the state that indicates that the input will not be recognized by the first gesture recognizer; maintaining the second gesture recognizer in the state that indicates that the second gesture recognizer is capable of recognizing the input as an intensity-based gesture; and transitioning the third gesture recognizer into the changed state. 24. The method of claim 23, including: subsequent to the determination that the input has remained on the touch-sensitive surface for at least the first predefined time period, in accordance with a determination that the input ceases to remain on the touch-sensitive surface: transitioning the third gesture recognizer into a state that indicates that the input has been recognized as a second intensity-based gesture and performing a fourth operation; and maintaining the first gesture recognizer in the state that indicates that the input will not be recognized by the first gesture recognizer.
An electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface displays a first user interface of a first software application, detects an input on the touch-sensitive surface while displaying the first user interface, and, in response to detecting the input while displaying the first user interface, performs a first operation in accordance with a determination that the input satisfies intensity input criteria including that the input satisfies a first intensity threshold and the input remains on the touch-sensitive surface for a first predefined time period, and performs a second operation in accordance with a determination that the input satisfies tap criteria including that the input ceases to remain on the touch-sensitive surface during the first predefined time period.
2,600
10,252
10,252
10,770,342
2,656
The present invention provides an FM Orthogonal Frequency Division Multiplexing (OFDM) modulation process that enables high-speed data communications over any transmission medium or network. The process is implemented with a modem device (modulator and demodulator) that communicates with several other modem devices over any communication medium using the FM OFDM modulation technique, including physical transmission media such as power lines, wireless (air), cable, or twisted-pair media.
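At its core, the OFDM modulation step the abstract describes maps data symbols onto mutually orthogonal subcarriers and takes an inverse discrete Fourier transform to obtain the time-domain signal; the receiver applies the forward transform to recover the symbols. The sketch below is an illustrative baseband model only: the subcarrier count and BPSK bit mapping are assumptions, and a real system would add a cyclic prefix, the FM stage, and channel coding.

```python
# Illustrative baseband OFDM modulator/demodulator using a direct
# inverse/forward DFT over n orthogonal subcarriers. Pure-Python sketch;
# parameters are invented for the example, not taken from the patent.
import cmath

def ofdm_modulate(bits, n_subcarriers=8):
    # BPSK-map one bit per subcarrier: 0 -> +1, 1 -> -1
    assert len(bits) == n_subcarriers
    symbols = [1.0 if b == 0 else -1.0 for b in bits]
    n = n_subcarriers
    # Inverse DFT: each time-domain sample sums the subcarriers, which
    # are mutually orthogonal over the symbol interval.
    return [sum(symbols[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def ofdm_demodulate(samples):
    n = len(samples)
    # Forward DFT projects the signal back onto each subcarrier.
    symbols = [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                   for t in range(n))
               for k in range(n)]
    return [0 if s.real > 0 else 1 for s in symbols]
```

Because the forward and inverse DFTs are exact inverses, demodulating a modulated block recovers the original bit vector on an ideal (noiseless) channel.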
1-35. (canceled) 36. An apparatus for transmitting an orthogonal frequency division multiplex communication signal over a forward channel of an electrical power line comprising: a digital signal processor adapted to processing an incoming baseband signal and outputting an orthogonal frequency division multiplex output signal; and a first coupler operatively connected to an output of the digital signal processor, said coupler being adapted to providing the output signal from the digital signal processor to the power line, wherein the digital signal processor includes at least one of a code selection unit and a modulation selection unit, said at least one code selection unit and said modulation selection unit being capable of selecting respectively, a particular error correction code and a particular type of modulation based on a quality of the forward channel. 37. The apparatus of claim 36, wherein the first coupler is a capacitive transformer. 38. The apparatus of claim 36, wherein the first coupler is a resin filled transformer. 39. The apparatus of claim 36, wherein the first coupler is a dielectric filled transformer. 40. The apparatus of claim 36, wherein the first coupler matches the power line characteristic impedance to the first coupler and the first coupler matches the characteristic impedance of the transmitter to the coupler. 41. The apparatus of claim 36, further including a first frequency modulation modulator, said first frequency modulation modulator receiving the output of said digital signal processor and outputting a frequency modulated signal to the first coupler. 42. 
The apparatus of claim 36, wherein the output of the digital signal processor consists of an in-phase signal and a quadrature signal, the in-phase signal being provided to the first frequency modulator and the quadrature signal being provided to a second frequency modulation modulator, the second frequency modulation modulator outputting a frequency modulated output signal to a second coupler. 43. An apparatus for receiving an orthogonal frequency division multiplex communication signal transmitted over a forward channel of an electrical power line comprising: a first coupler adapted to connecting to the electrical power line and receiving the orthogonal frequency division multiplex communication signal from the electrical power line; and a digital signal processor operatively connected to an output of the first coupler, said digital signal processor being particularly adapted to processing the orthogonal frequency division multiplex communication signal and outputting a baseband signal, wherein the digital signal processor includes at least one of a decoder selection unit and a demodulation selection unit, said at least one of said decoder selection unit and said demodulation selection unit being capable of selecting respectively, a particular error correction code and a particular type of demodulation based on the quality of the forward channel. 44. The apparatus of claim 43, wherein the first coupler is a capacitive transformer. 45. The apparatus of claim 43, wherein the first coupler is a resin filled transformer. 46. The apparatus of claim 43, wherein the first coupler is a dielectric filled transformer. 47. The apparatus of claim 43, wherein the first coupler matches the power line characteristic impedance to the first coupler and the first coupler matches the characteristic impedance of the receiver to the coupler. 48. 
The apparatus of claim 43, further including a first frequency demodulator, said first frequency demodulator receiving the output of said first coupler and providing a frequency demodulated signal to the digital signal processor. 49. The apparatus of claim 43, further including a second coupler, wherein an output of the first coupler consists of an in-phase signal and an output of the second coupler consists of a quadrature signal, the output of the second coupler being provided to a second frequency demodulator, the first and the second frequency modulation demodulators each outputting a frequency demodulated signal to the digital signal processor. 50. A method of communicating an electrical signal over a forward channel of an electrical power line comprising the steps of: connecting one pair of terminals of respective transmitting and receiving couplers to the electrical power line; connecting respective modulating and demodulating digital signal processors to a second pair of terminals of the respective transmitting and receiving couplers; and connecting a source of a baseband signal to an input of the modulating digital signal processor, wherein the modulating digital signal processor modulates the baseband signal received from the source of the baseband signal with an orthogonal frequency division multiplex type of modulation; transmitting the orthogonal frequency division multiplexed baseband signal over the power line; and receiving and demodulating the orthogonal frequency division multiplexed baseband signal such that the source baseband signal is substantially recreated. 51. The method of claim 50, further including the step of selecting at least one of a particular error correction code and a particular type of modulation based on a quality of the forward channel. 52. 
The method of claim 50, further including the step of frequency modulating the orthogonal frequency division multiplexed signal output from the modulating digital signal processor and frequency demodulating the frequency modulated signal prior to providing the frequency modulated signal to the demodulating digital signal processor. 53. The method of claim 52, further including the step of splitting the signal output from the modulating digital signal processor into in-phase and quadrature signals. 54. An apparatus for transmitting a communication signal over a transmission media comprising: a digital signal processor adapted to processing an incoming baseband signal and outputting an orthogonal frequency division multiplex output signal; and a first frequency modulation modulator, said first frequency modulation modulator receiving the output signal of said digital signal processor and outputting a frequency modulated signal to the transmission media. 55. The apparatus of claim 54, further including a second frequency modulation modulator wherein the output signal of the digital signal processor consists of an in-phase signal and a quadrature signal, the in-phase signal being provided to the first frequency modulation modulator and the quadrature signal being provided to the second frequency modulation modulator, the first and the second frequency modulation modulators each outputting a frequency modulated output signal to the transmission media. 56. An apparatus for receiving a communication signal from a transmission media comprising: a first frequency demodulator, said first frequency demodulator adapted to receiving an output from the transmission media and outputting a frequency demodulated signal; and a digital signal processor operatively connected to an output of the first frequency demodulator, said digital signal processor being adapted to processing an orthogonal frequency division multiplex communication signal and outputting a baseband signal. 57. 
The apparatus of claim 56, wherein an output of the transmission media consists of an in-phase signal and a quadrature signal, the in-phase signal being provided to the first frequency modulation demodulator and the quadrature signal being provided to a second frequency demodulator, the first and the second frequency modulation demodulators each outputting a frequency demodulated signal to the digital signal processor.
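Claims 36-57 above build on standard OFDM: an inverse DFT maps subcarrier symbols into a time-domain signal for transmission, and a DFT at the receiver recovers them. A minimal pure-Python round-trip sketch; the claims do not specify a transform size or cyclic-prefix length, so the values below are illustrative assumptions:

```python
import cmath

def idft(xs):
    """Inverse DFT: subcarrier symbols -> time-domain samples."""
    n = len(xs)
    return [sum(x * cmath.exp(2j * cmath.pi * k * t / n) for k, x in enumerate(xs)) / n
            for t in range(n)]

def dft(xs):
    """Forward DFT: time-domain samples -> subcarrier symbols."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * t * k / n) for t, x in enumerate(xs))
            for k in range(n)]

def ofdm_modulate(symbols, cp_len):
    """IDFT the symbols onto orthogonal subcarriers; prepend a cyclic prefix."""
    time_sig = idft(symbols)
    return time_sig[-cp_len:] + time_sig

def ofdm_demodulate(rx, n_sub, cp_len):
    """Strip the cyclic prefix and DFT back to subcarrier symbols."""
    return dft(rx[cp_len:cp_len + n_sub])

# Round trip over an ideal channel: BPSK symbols on 8 subcarriers.
tx_symbols = [1, -1, 1, 1, -1, 1, -1, -1]
rx_symbols = ofdm_demodulate(ofdm_modulate(tx_symbols, cp_len=2), n_sub=8, cp_len=2)
assert all(abs(r - t) < 1e-9 for r, t in zip(rx_symbols, tx_symbols))
```

Over a real power line the received samples would also pass through channel equalization before the final decision, which the sketch omits.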
The present invention provides an FM Orthogonal Frequency Division Multiplexing (OFDM) modulation process that enables high-speed data communications over any transmission media and networks. The process is implemented with a modem device modulator and demodulator that provides communication with several other modem devices along any communication media that uses an FM OFDM modulation technique, over a physical transmission medium such as power lines, wireless (air), cable, or twisted-pair communication media.
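Claim 36's code selection and modulation selection units pick a particular error correction code and type of modulation based on forward-channel quality. A hedged sketch of one plausible selection rule; the SNR thresholds and profile names below are invented for illustration and are not taken from the claims:

```python
# Hypothetical SNR thresholds (dB) mapped to (error-correction code, modulation)
# profiles, ordered from best channel to worst. The claims specify no values.
PROFILES = [
    (18.0, ("rate-3/4", "16-QAM")),
    (10.0, ("rate-1/2", "QPSK")),
    (0.0,  ("rate-1/3", "BPSK")),
]

def select_profile(snr_db):
    """Pick a code/modulation pair from the measured forward-channel quality."""
    for threshold, profile in PROFILES:
        if snr_db >= threshold:
            return profile
    return PROFILES[-1][1]  # channel worse than every threshold: most robust profile

assert select_profile(20.0) == ("rate-3/4", "16-QAM")
assert select_profile(-5.0) == ("rate-1/3", "BPSK")
```

A receiver applying claim 43 would run the mirror-image table to choose the matching decoder and demodulator.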
2,600
10,253
10,253
14,719,652
2,661
An apparatus, method, system and computer-readable medium are provided for generating one or more descriptors that may potentially be associated with content, such as video or a segment of video. In some embodiments, a teaser for the content may be identified based on contextual similarity between words and/or phrases in the segment and one or more other segments, such as a previous segment. In some embodiments, an optical character recognition (OCR) technique may be applied to the content, such as banners or graphics associated with the content in order to generate or identify OCR'd text or characters. The text/characters may serve as a candidate descriptor(s). In some embodiments, one or more strings of characters or words may be compared with (pre-assigned) tags associated with the content, and if it is determined that the one or more strings or words match the tags within a threshold, the one or more strings or words may serve as a candidate descriptor(s). One or more candidate descriptor identification techniques may be combined.
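The abstract's third technique, comparing OCR'd strings or transcript words against pre-assigned tags within a threshold, can be sketched as a simple word-overlap test. The 0.5 threshold and the overlap measure are illustrative assumptions, not values from the application:

```python
def matches_tags(candidate, tags, threshold=0.5):
    """True if enough of the candidate string's words appear among the tags."""
    words = set(candidate.lower().split())
    tag_words = set(w for t in tags for w in t.lower().split())
    if not words:
        return False
    return len(words & tag_words) / len(words) >= threshold

def candidate_descriptors(strings, tags, threshold=0.5):
    """Keep the OCR'd/transcript strings that match the tags within the threshold."""
    return [s for s in strings if matches_tags(s, tags, threshold)]

tags = ["local news", "city council budget"]
strings = ["Council votes on budget", "Sports scores tonight"]
assert candidate_descriptors(strings, tags) == ["Council votes on budget"]
```

In practice a real system would likely stem words and weight rare terms rather than counting raw overlap.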
1. (canceled) 2. A method comprising: receiving, by a computing device, descriptive data for at least a first segment of a content item and a second segment of the content item; comparing, by the computing device and based at least in part on the descriptive data, first segment words with second segment words, wherein the first segment words comprise a plurality of words from the first segment and the second segment words comprise one or more words from the second segment; determining, by the computing device and based at least in part on the comparing, that the first segment words are descriptive of the second segment; and determining, by the computing device and responsive to determining that the first segment words are descriptive of the second segment, the first segment words as a candidate descriptor of the second segment. 3. The method of claim 2, wherein the second segment precedes the first segment in the content item. 4. The method of claim 2, wherein the determining that the first segment words are descriptive of the second segment comprises determining that the first segment words are descriptive of the second segment based at least in part on salient tag detection and a threshold. 5. The method of claim 2, wherein the determining that the first segment words are descriptive of the second segment comprises determining that the first segment words are descriptive of the second segment based at least in part on the first segment words being within a threshold distance of a commercial break. 6. 
The method of claim 2, further comprising: iteratively comparing the second segment words with a plurality of other words from the first segment; determining, for each of the other words from the first segment, whether one or more of the other words are descriptive of the second segment; and based on the one or more of the other words being descriptive of the second segment, determining the one or more of the other words as another candidate descriptor of the second segment. 7. The method of claim 2, wherein the descriptive data comprises a transcript that is associated with at least the first segment, the second segment, and a third segment of the content item, the method further comprising: comparing third segment words with the second segment words, wherein the third segment words comprise one or more words included in the third segment; determining that the third segment words are descriptive of the second segment; and responsive to determining that the third segment words are descriptive of the second segment, determining the third segment words as a second candidate descriptor of the second segment. 8. The method of claim 2, wherein the determining that the first segment words are descriptive of the second segment comprises determining that the first segment words are descriptive of the second segment based on a normalization of a result generated by the comparing. 9. The method of claim 2, wherein the first segment words comprise at least one of a phrase and a sentence, and wherein the determining that the first segment words are descriptive of the second segment comprises determining that the first segment words are descriptive of the second segment based at least in part on the first segment being longer than a threshold. 10. 
The method of claim 2, further comprising determining a relationship strength between the descriptive data for the first segment and descriptive data for at least one additional segment within a threshold proximity to the first segment; and determining the first segment words based on the relationship strength of the first segment words being below a maximum threshold. 11. A method comprising: receiving, by a computing device, descriptive data for a segment of a content item; receiving, by the computing device, at least one tag assigned to the content item; iteratively comparing, by the computing device, the descriptive data and the at least one tag; and determining, by the computing device, whether to add the descriptive data to a candidate descriptor for the content item based on the comparing. 12. The method of claim 11, wherein the descriptive data comprises a transcript. 13. The method of claim 11, wherein the comparing the descriptive data and the at least one tag further comprises determining a relationship strength of the descriptive data relative to the at least one tag. 14. The method of claim 13, wherein the determining whether to add the descriptive data to the candidate descriptor for the content item based on the comparing comprises determining to add at least one portion of the descriptive data based on the relationship strength exceeding a minimum strength level. 15. 
The method of claim 11, further comprising: determining a relationship strength for the descriptive data relative to descriptive data for at least one additional segment, wherein the additional segment is within a threshold proximity to the segment; and wherein the determining whether to add the descriptive data to the candidate descriptor for the content item based on the comparing comprises determining whether to add the descriptive data to the candidate descriptor for the content item based on the comparing and the relationship strength of the descriptive data being below a maximum threshold. 16. The method of claim 11, wherein the comparing the descriptive data and the at least one tag comprises comparing the descriptive data and the at least one tag using salient tag detection. 17. The method of claim 16, wherein the salient tag detection is adjusted based on results from a user survey. 18. A method comprising: determining, by a computing device, a first relationship strength between a first segment of a content item and a second segment of the content item by iteratively comparing words from the first segment with words from the second segment; and determining, by the computing device, whether to use the words from the first segment as a candidate descriptor for the second segment responsive to the first relationship strength exceeding a minimum threshold using the words from the first segment and the words from the second segment. 19. The method of claim 18, further comprising: determining a second relationship strength by iteratively comparing words from a third segment with words from the second segment; and determining whether to use the words from the third segment as the candidate descriptor for the second segment responsive to the second relationship strength exceeding the first relationship strength. 20. 
The method of claim 18, wherein the determining the first relationship strength comprises determining the first relationship strength by iteratively comparing words from the first segment with words from the second segment using salient tag detection. 21. The method of claim 20, wherein the salient tag detection is adjusted based on results from a user survey.
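Claims 18-21 determine a relationship strength by iteratively comparing words from one segment with words from another, then use the first segment's words as a candidate descriptor when the strength exceeds a minimum threshold. A sketch using a Jaccard index as a stand-in strength measure; the claims leave both the measure and the threshold unspecified:

```python
def relationship_strength(words_a, words_b):
    """A hypothetical strength measure: fraction of shared words (Jaccard index)."""
    a, b = set(words_a), set(words_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def candidate_from_teaser(first_seg_words, second_seg_words, min_threshold=0.2):
    """Return the first segment's words as a candidate descriptor of the second
    segment when the relationship strength exceeds the minimum threshold."""
    strength = relationship_strength(first_seg_words, second_seg_words)
    return first_seg_words if strength > min_threshold else None

teaser = ["storm", "closes", "coastal", "highway"]
story = ["the", "coastal", "highway", "reopened", "after", "the", "storm"]
assert candidate_from_teaser(teaser, story) == teaser
```

Claim 19's extension would compute a second strength for a third segment and prefer whichever segment's words score higher.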
2,600
10,254
10,254
14,885,139
2,626
Systems and methods are provided for displaying a desktop for a multi-screen device in response to opening the device. The window stack can change based on the change in the orientation of the device. The system can receive an orientation change that transitions the device from a closed state to an open state. A window previously created in the stack can expand over the area of the two or more displays comprising the device when opened. A desktop can expand to fill the display area and be displayed on the second of the displays after the device is opened.
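The window-stack claims that follow model each portion of the composite display as an ordered stack whose logical top entry is what gets shown; on opening the device, a second stack is created with the desktop at its logical top. A minimal sketch under those assumptions (class and entry names are hypothetical):

```python
class WindowStack:
    """A window stack: entries ordered from logical bottom to logical top."""
    def __init__(self, entries):
        self.entries = list(entries)  # e.g. ["desktop", "browser"]

    def top(self):
        """The active entry displayed for this stack's portion of the display."""
        return self.entries[-1] if self.entries else None

def on_device_opened(first_stack):
    """Transition from closed to open state: create a second stack for the
    second display portion. Here the desktop is its only entry, so it sits at
    the logical top and is what the second portion shows."""
    second_stack = WindowStack(["desktop"])
    return first_stack.top(), second_stack.top()

first = WindowStack(["desktop", "mail_app"])
shown = on_device_opened(first)
assert shown == ("mail_app", "desktop")
```

Claims 22 and 26 would additionally attach a window identifier and a stack-position identifier to each entry, which the sketch folds into plain list order.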
1-20. (canceled) 21. A non-transitory computer readable medium, having stored thereon, computer-executable instructions executable by a processor, the computer-executable instructions causing the processor to execute a method for managing window stacks for a mobile device, the computer-executable instructions comprising: instructions to create a composite display having a first portion and a second portion; instructions to create a first window stack logically associated with the first portion of the composite display, wherein the first window stack is a data structure representing a logical arrangement of one or more of active windows, inactive windows, and a desktop executing on the mobile device; instructions to display a first window of a first open application that is at a logical top position of the first window stack on the first portion of the composite display; instructions to receive a change in the mobile device from a first state to a second state; in response to the change to the second state, instructions to create a second window stack logically associated with the second portion of the composite display, wherein the second window stack is a second data structure representing a second logical arrangement of one or more active windows, inactive windows, and the desktop executing on the mobile device; instructions to determine a second window or desktop associated with the second window stack to display on the second portion of the composite display, wherein the second window or desktop comprises a window or desktop in the second window stack that is active and that is at a logical top position of the second window stack; and after determining the desktop is at the logical top position of the second window stack and should be displayed on the second portion of the composite display, instructions to display the desktop on the second portion of the composite display and the first window on the first portion of the composite display. 22. 
The non-transitory computer readable medium as defined in claim 21, wherein each of the first and second window stacks include one or more of: a window identifier adapted to identify each window and the desktop in relation to other windows and the desktop in one of the first and second window stacks; and a window stack position identifier adapted to identify a logical position of each window or desktop in one of the first and second window stacks. 23. The non-transitory computer readable medium as defined in claim 21, wherein the first window stack can have a different number of windows or desktops than the second window stack, and wherein a window can be moved from one of the first and second window stacks to the other of the first and second window stacks. 24. A device comprising: a composite display having a first portion and a second portion; a memory; and a processor in communication with the memory and the composite display, the processor operable to: create a first window stack logically associated with the first portion of the composite display, wherein the first window stack is a data structure representing a logical arrangement of one or more of active windows, inactive windows, and a desktop executing on the device; display a first window that is at a logical top position of the first window stack on the first portion of the composite display; receive a change in the device from a first state to a second state; in response to the change to the second state, create a second window stack logically associated with the second portion of the composite display, wherein the second window stack is a second data structure representing a second logical arrangement of one or more of active windows, inactive windows, and the desktop executing on the device; determine a second window or desktop associated with the second window stack to display on the second portion of the composite display; determine that the second window is at a logical top position of the second 
window stack; and display the first window on the first portion of the composite display and the second window on the second portion of the composite display. 25. The device as defined in claim 24, wherein a first display comprises the first portion of the composite display and a second display comprises the second portion of the composite display. 26. The device as defined in claim 24, wherein each of the first and second window stacks includes one or more of: a window identifier adapted to identify each window and the desktop in relation to other windows and the desktop in one of the first and second window stacks; and a window stack position identifier adapted to identify a logical position of each window or desktop in one of the first and second window stacks. 27. The device as defined in claim 25, wherein the first and second displays are on a common first screen. 28. The device as defined in claim 25, wherein the first display is on a first screen, and wherein the second display is on a different second screen. 29. 
A method for presenting a display of windows or desktops for a device, the method comprising: creating a composite display having a first portion and a second portion; creating a first window stack logically associated with the first portion of the composite display, wherein the first window stack is a data structure representing a logical arrangement of one or more of active windows, inactive windows, and a desktop executing on the device; displaying a first window that is at a logical top position of the first window stack on the first portion of the composite display; receiving a change for the device from a first state to a second state; in response to the change to the second state, creating a second window stack logically associated with the second portion of the composite display, wherein the second window stack is a second data structure representing a second logical arrangement of one or more of active windows, inactive windows, and the desktop executing on the device; determining a second window or desktop associated with the second window stack to display on the second portion of the composite display; determining that the second window is at a logical top position of the second window stack; and displaying the first window on the first portion of the composite display and the second window on the second portion of the composite display. 30. The method defined in claim 29, wherein the desktop is at a logical bottom position of the second window stack and is inactive, wherein the second window stack includes two or more other windows that are inactive, and wherein the desktop and the two or more other windows are not displayed on the second portion of the composite display. 31. The method defined in claim 29, wherein a first display comprises the first portion of the composite display and a second display comprises the second portion of the composite display. 32. 
The method defined in claim 31, wherein the first display is on a first screen, and wherein the second display is on a different second screen. 33. The method defined in claim 31, wherein the first and second displays are on a common first screen. 34. The non-transitory computer readable medium as defined in claim 21, further comprising: instructions to receive an input to move the first window from the first window stack to the second window stack; instructions to modify the second window stack, wherein the first window is moved to an active position at the logical top position of the second window stack and the desktop is moved to an inactive position in the second window stack; and instructions to display the first window on the second portion of the composite display. 35. The non-transitory computer readable medium as defined in claim 34, further comprising: in response to moving the first window to the second window stack, instructions to determine a third window or desktop to display on the first portion of the composite display; and after determining that the desktop is at the logical top position of the first window stack and should be displayed on the first portion of the composite display, instructions to display the desktop on the first portion of the composite display. 36. The non-transitory computer readable medium as defined in claim 21, wherein a first display comprises the first portion of the composite display and a second display comprises the second portion of the composite display. 37. The non-transitory computer readable medium as defined in claim 36, wherein the first display is on a first screen, and wherein the second display is on a different second screen. 38. The non-transitory computer readable medium as defined in claim 36, wherein the first and second displays are on a common first screen. 39. 
The method defined in claim 30, further comprising: receiving an input to move the second window from the second portion of the composite display to the first portion of the composite display; modifying the first window stack to move the first window to an inactive position in the first window stack; modifying the first window stack to move the second window to the logical top position of the first window stack; and displaying the second window on the first portion of the composite display. 40. The method defined in claim 39, further comprising: in response to moving the second window to the first window stack, determining a third window or desktop to display on the second portion of the composite display; and after determining that the third window is one of the two or more other windows that is at the logical top position of the second window stack, displaying the third window on the second portion of the composite display.
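The claims above describe window stacks as ordered data structures in which the entry at the logical top position is the one displayed, and in which a window may be moved between the stacks of two display portions. The following is a minimal illustrative sketch of that model; all class and function names are hypothetical and do not come from any real windowing API.

```python
# Illustrative model of the claimed window stacks: each display portion owns
# a stack, and the entry at index 0 (the logical top) is the displayed one.
from dataclasses import dataclass, field


@dataclass
class Window:
    name: str
    active: bool = True


@dataclass
class WindowStack:
    """Logical arrangement of windows and a desktop for one display portion."""
    entries: list = field(default_factory=list)  # index 0 = logical top

    def top(self):
        return self.entries[0] if self.entries else None

    def move_to_top(self, window):
        # The window moved to the top becomes the active, displayed window.
        if window in self.entries:
            self.entries.remove(window)
        window.active = True
        self.entries.insert(0, window)


def move_between_stacks(window, src: WindowStack, dst: WindowStack):
    """Move a window from one stack to the other, as in claims 34 and 39:
    the moved window becomes the top of the destination stack, and whatever
    is revealed at the top of the source stack becomes active there."""
    was_top = src.top() is window
    src.entries.remove(window)
    if was_top and src.top() is not None:
        src.top().active = True  # reveal the next window or the desktop
    dst.move_to_top(window)
```

For example, moving the only open application window off a stack whose bottom entry is the desktop leaves the desktop as the displayed top of the source stack, matching the behavior recited in claim 35.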
Systems and methods are provided for displaying a desktop for a multi-screen device in response to opening the device. The window stack can change based on the change in the orientation of the device. The system can receive an orientation change that transitions the device from a closed state to an open state. A window previously created in the stack can expand over the area of the two or more displays comprising the device when opened. A desktop expands to fill the display area and is displayed on the second of the displays after the device is opened. 1-20. (canceled) 21. A non-transitory computer readable medium, having stored thereon computer-executable instructions executable by a processor, the computer-executable instructions causing the processor to execute a method for managing window stacks for a mobile device, the computer-executable instructions comprising: instructions to create a composite display having a first portion and a second portion; instructions to create a first window stack logically associated with the first portion of the composite display, wherein the first window stack is a data structure representing a logical arrangement of one or more of active windows, inactive windows, and a desktop executing on the mobile device; instructions to display a first window of a first open application that is at a logical top position of the first window stack on the first portion of the composite display; instructions to receive a change in the mobile device from a first state to a second state; in response to the change to the second state, instructions to create a second window stack logically associated with the second portion of the composite display, wherein the second window stack is a second data structure representing a second logical arrangement of one or more of active windows, inactive windows, and the desktop executing on the mobile device; instructions to determine a second window or desktop associated with the second window stack to display on the 
second portion of the composite display, wherein the second window or desktop comprises a window or desktop in the second window stack that is active and that is at a logical top position of the second window stack; and after determining the desktop is at the logical top position of the second window stack and should be displayed on the second portion of the composite display, instructions to display the desktop on the second portion of the composite display and the first window on the first portion of the composite display.
2,600
10,255
10,255
15,109,676
2,651
The invention relates to a method for reproducing audio in a multi-channel sound system including two input signals (L and R), wherein output signals are generated for different sound perception levels. In order to develop said method in such a way that audio can be reproduced within a larger range of applications in a multi-channel sound system, according to the invention, only a lower sound perception level ( 7 ) and a higher sound perception level ( 6 ) are generated, and a maximum of six output signals are generated, a maximum of two output signals being allocated to the lower sound perception level ( 7 ) and a maximum of four output signals being allocated to the higher sound perception level ( 6 ).
1. A method for audio reproduction in a multi-channel sound system comprising two input signals L and R, wherein output signals are generated for different listening levels, characterized in that only one lower listening level (7) and only one upper listening level (6) are generated, wherein a maximum of six output signals, with a maximum of two output signals for the lower listening level (7) and a maximum of four output signals for the upper listening level (6), are generated. 2. The method according to claim 1, characterized in that stereo signals are generated for the signals in the lower listening level (7) and the upper listening level (6). 3. The method according to claim 1, characterized in that mono signals are generated for the signals in the lower listening level (7) and upper listening level (6). 4. The method according to claim 1, characterized in that mono signals are generated for the signals in the lower listening level (7). 5. The method according to claim 1, characterized in that mono signals are generated for the signals in the upper listening level (6). 6. The method according to one of the claims 1 to 5, characterized in that the output signals serve as further input signals. 7. The method according to one of the claims 1 to 6, characterized in that channels are decoded from the input channels intended for the input signals R and L. 8. The method according to claim 7, characterized in that the decoded channels are generated in the form of a left spatial channel RL=L−R, a right spatial channel RR=R−L, as well as a center channel C=L+R. 9. The method according to one of the claims 7 or 8, characterized in that channels (8, 9), guided linearly and parallel to the decoded channels, are generated from the input channels. 10. The method according to claim 9, characterized in that R and L are generated as output signals for the lower listening level (7). 11. 
The method according to one of the claims 6 to 10, characterized in that the decoded signals are processed further to output signals of the higher listening level (6). 12. The method according to one of the preceding claims, characterized in that at least a portion of the input channels and/or the output channels are added to one another. 13. The method according to one of the preceding claims, characterized in that, at most, two output signals for the lower listening level (7) and, at most, two output signals for the upper listening level (6) are generated. 14. A device with sound input and sound output channels, as well as a processor, wherein loudspeakers (26, 27, 33) are assigned to the device, characterized in that software is imported onto the processor, which contains an algorithm that is processed by the processor, wherein the algorithm implements the method according to one of the claims 1 to 9. 15. The device according to claim 14, characterized in that it has picture input and picture output channels.
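Claims 8 and 10 give the decoding arithmetic explicitly: a left spatial channel RL = L − R, a right spatial channel RR = R − L, a center channel C = L + R, with the unmodified L and R feeding the lower listening level. The sketch below applies those formulas to one sample pair; the exact routing of the decoded channels to the upper listening level is an illustrative assumption, since the claims only cap it at four output signals.

```python
# Per-sample decoding following claims 8 and 10 of the audio-reproduction
# method. The assignment of decoded channels to the upper listening level
# is an assumed routing (the claims permit up to four upper outputs).

def decode_levels(left: float, right: float):
    """Return (lower_level, upper_level) output tuples for one sample pair."""
    rl = left - right   # left spatial channel:  RL = L - R  (claim 8)
    rr = right - left   # right spatial channel: RR = R - L
    c = left + right    # center channel:        C  = L + R
    lower = (left, right)   # lower listening level gets R and L (claim 10)
    upper = (rl, rr, c)     # upper listening level: decoded channels
    return lower, upper
```

For instance, with L = 1.0 and R = 0.25 the decoded channels are RL = 0.75, RR = −0.75, and C = 1.25.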
2,600
10,256
10,256
15,073,023
2,646
A system for controlling access of a mobile electronic device to a vehicle network includes a body area network for carrying signals through an occupant of a vehicle. The network includes an electrode located in proximity to an occupant of the vehicle, and a signal generator for providing a signal to the electrode. The mobile electronic device is configured to detect the signal provided to the electrode and conducted through the body area network and is configured to electronically couple to the vehicle network upon detecting the signal. The vehicle network is configured to provide a security token to the mobile electronic device after the mobile electronic device is electronically coupled to the vehicle network. The vehicle network is configured to restrict access to the network to only mobile electronic devices having the provided security token.
1. A system for controlling access of a mobile electronic device to a vehicle network, comprising: a body area network for carrying signals through an occupant of a vehicle, wherein the network includes an electrode located in proximity to an occupant of the vehicle, and a signal generator for providing a signal to the electrode; wherein the mobile electronic device is configured to detect the signal provided to the electrode and conducted through the body area network; wherein the mobile electronic device is configured to electronically couple to the vehicle network upon detecting the signal; wherein the vehicle network is configured to provide a security token to the mobile electronic device after the mobile electronic device is electronically coupled to the vehicle network; and wherein the vehicle network is configured to restrict access to the network to only mobile electronic devices having the provided security token. 2. The system of claim 1, wherein the mobile electronic device is configured to require the user to interact with the mobile device to acknowledge receipt of the security token. 3. The system of claim 1, wherein the vehicle network is configured to no longer accept the provided security token after a predetermined time period. 4. The system of claim 1, wherein the vehicle network is configured to restrict access to the network unless the mobile electronic device provides indication that the mobile electronic device is connected to the body area network. 5. 
A system for controlling access of a mobile electronic device to a vehicle network, comprising: a body area network for carrying signals through an occupant of a vehicle, wherein the network includes an electrode located in proximity to an occupant of the vehicle, and a signal generator for providing a signal to the electrode; wherein the mobile electronic device is configured to detect the signal provided to the electrode and conducted through the body area network; wherein the mobile electronic device is configured to electronically couple to the vehicle network upon detecting the signal; wherein the vehicle network is configured to only allow the mobile electronic device to couple with the network if the mobile electronic device provides an accepted security token to the vehicle network and an indication that the mobile electronic device is coupled with the body area network. 6. The system of claim 5, wherein the vehicle includes a second body area network and wherein the vehicle network is configured to change the level of access provided to the mobile electronic device based on the body area network that the mobile electronic device is coupled to. 7. The system of claim 6, wherein the level of access provided to the mobile electronic device is different for a mobile electronic device coupled to the first body area network operating through an occupant positioned in the driver seat than for a mobile electronic device coupled to the second body area network operating through an occupant positioned in a passenger seat of the vehicle. 8. The system of claim 7, wherein the mobile electronic device coupled to the first mentioned body area network has less restrictive access than the mobile electronic device coupled to the second body area network. 9. 
The system of claim 5, wherein the vehicle network is configured to adjust the access level provided to the mobile electronic device based on information stored by the mobile electronic device and provided to the vehicle network. 10. The system of claim 5, wherein the vehicle network is configured to prevent operation of the vehicle unless the mobile electronic device is coupled to the network. 11. The system of claim 5, wherein the accepted security token is provided to the mobile electronic device through a registration process that identifies the occupant as an authorized driver of the vehicle. 12. The system of claim 11, wherein the security token is provided to the mobile electronic device from a source other than the vehicle network. 13. The system of claim 11, wherein the connection between the mobile electronic device and the vehicle network is discontinued if the mobile electronic device is no longer coupled with the body area network. 14. The system of claim 13, wherein the connection between the mobile electronic device and the vehicle network is discontinued if the mobile electronic device is not located in the vehicle. 15. The system of claim 14, wherein the location of the mobile device is determined using an indoor positioning system. 16. 
A system for controlling access of a mobile electronic device to a vehicle network, comprising: a body area network for carrying signals through an occupant of a vehicle, wherein the network includes an electrode located in proximity to an occupant of the vehicle; and a signal generator for providing a signal to the electrode; wherein the mobile electronic device is configured to detect the signal provided to the electrode and conducted through the body area network; wherein the mobile electronic device is configured to electronically couple to the vehicle network upon detecting the signal; wherein the vehicle network is configured to provide a time limited security token to the mobile electronic device after the mobile electronic device is electronically coupled to the vehicle network; wherein the security token is inactivated after a predetermined time period; and wherein the vehicle network is configured to restrict access to the network to only mobile electronic devices having the provided security token within the predetermined time period. 17. The system of claim 16, further comprising a proximity sensor located in the vicinity of the occupant; and wherein the connection between the mobile electronic device and the vehicle network is discontinued if the proximity sensor fails to detect the presence of the occupant for a specified time period. 18. The system of claim 17, wherein the proximity sensor is located in one of a floor of the vehicle, a steering wheel, a vehicle seat, or a vehicle seat belt. 19. The system of claim 16, wherein the vehicle network is configured to restrict access to the network unless the mobile electronic device provides indication that the mobile electronic device is connected to the body area network.
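The independent claims describe a token scheme with two gates: the vehicle network issues a token when a device couples after detecting the body-area-network signal, the token is inactivated after a predetermined period, and access further requires an indication that the device is still coupled to the body area network. A minimal sketch of that logic follows; the class, the token format, and the lifetime constant are all illustrative assumptions, not part of any real vehicle-network API.

```python
# Hedged sketch of the time-limited token scheme in the claims: a token is
# issued on coupling, expires after a predetermined period, and access also
# requires a live body-area-network (BAN) connection. Names are illustrative.
import secrets
import time

TOKEN_LIFETIME_S = 300.0  # assumed value for the "predetermined time period"


class VehicleNetwork:
    def __init__(self):
        self._tokens = {}  # token -> expiry timestamp (monotonic clock)

    def issue_token(self) -> str:
        """Provide a time-limited security token to a newly coupled device."""
        token = secrets.token_hex(16)
        self._tokens[token] = time.monotonic() + TOKEN_LIFETIME_S
        return token

    def grant_access(self, token: str, ban_connected: bool) -> bool:
        """Access needs a known, unexpired token AND a BAN-coupling indication."""
        expiry = self._tokens.get(token)
        if expiry is None or time.monotonic() > expiry:
            return False       # unknown token, or inactivated after its period
        return ban_connected   # restrict access without the BAN indication
```

A device that presents a stale or unknown token, or that can no longer show it is coupled through the occupant's body area network, is refused, which mirrors the discontinuation conditions in the dependent claims.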
A system for controlling access of a mobile electronic device to a vehicle network includes a body area network for carrying signals through an occupant of a vehicle. The network includes an electrode located in proximity to an occupant of the vehicle and, a signal generator for providing a signal to the electrode. The mobile electronic device is configured to detect the signal provided to the electrode and conducted through the body area network and is configured to electronically couple to the vehicle network upon detecting the signal. The vehicle network is configured to provide a security token to the mobile electronic device after the mobile electronic device is electronically coupled to the vehicle network. The vehicle network is configured to restrict access to the network to only mobile electronic devices having the provided security token.1. A system for controlling access of a mobile electronic device to vehicle network, comprising: a body area network for carrying signals through an occupant of a vehicle, wherein the network includes an electrode located in proximity to an occupant of the vehicle and, a signal generator for providing a signal to the electrode; wherein the mobile electronic device is configured to detect the signal provided to the electrode and conducted through the body area network; wherein the mobile electronic device is configured to electronically couple to the vehicle network upon detecting the signal; wherein the vehicle network is configured to provide a security token to the mobile electronic device after the mobile electronic device is electronically coupled to the vehicle network; and wherein the vehicle network is configured to restrict access to the network to only mobile electronic devices having the provided security token. 2. The system of claim 1, wherein the electronic device is configured to require the user to interact with the mobile device to acknowledge receipt of the security token. 3. 
The system of claim 1, wherein the vehicle network is configured to no longer accept the provided security token after a predetermined time period. 4. The system of claim 1, wherein the vehicle network is configured to restrict access to the network unless the mobile electronic device provides an indication that the mobile electronic device is connected to the body area network. 5. A system for controlling access of a mobile electronic device to a vehicle network, comprising: a body area network for carrying signals through an occupant of a vehicle, wherein the network includes an electrode located in proximity to an occupant of the vehicle, and a signal generator for providing a signal to the electrode; wherein the mobile electronic device is configured to detect the signal provided to the electrode and conducted through the body area network; wherein the mobile electronic device is configured to electronically couple to the vehicle network upon detecting the signal; wherein the vehicle network is configured to only allow the mobile electronic device to couple with the network if the mobile electronic device provides an accepted security token to the vehicle network and an indication that the mobile electronic device is coupled with the body area network. 6. The system of claim 5, wherein the vehicle includes a second body area network and wherein the vehicle network is configured to change the level of access provided to the mobile electronic device based on the body area network that the mobile electronic device is coupled to. 7. The system of claim 6, wherein the level of access provided to the mobile electronic device is different for a mobile electronic device coupled to the first body area network operating through an occupant positioned in the driver seat than for a mobile electronic device coupled to the second body area network operating through an occupant positioned in a passenger seat of the vehicle. 8. 
The system of claim 7, wherein the mobile electronic device coupled to the first mentioned body area network has less restrictive access than the mobile electronic device coupled to the second body area network. 9. The system of claim 5, wherein the vehicle network is configured to adjust the access level provided to the mobile electronic device based on information stored by the mobile electronic device and provided to the vehicle network. 10. The system of claim 5, wherein the vehicle network is configured to prevent operation of the vehicle unless the mobile electronic device is coupled to the network. 11. The system of claim 5, wherein the accepted security token is provided to the mobile electronic device through a registration process that identifies the occupant as an authorized driver of the vehicle. 12. The system of claim 11, wherein the security token is provided to the mobile electronic device from a source other than the vehicle network. 13. The system of claim 11, wherein the connection between the mobile electronic device and the vehicle network is discontinued if the mobile electronic device is no longer coupled with the body area network. 14. The system of claim 13, wherein the connection between the mobile electronic device and the vehicle network is discontinued if the mobile electronic device is not located in the vehicle. 15. The system of claim 14, wherein the location of the mobile device is determined using an indoor positioning system. 16. 
A system for controlling access of a mobile electronic device to a vehicle network, comprising: a body area network for carrying signals through an occupant of a vehicle, wherein the network includes an electrode located in proximity to an occupant of the vehicle; and a signal generator for providing a signal to the electrode; wherein the mobile electronic device is configured to detect the signal provided to the electrode and conducted through the body area network; wherein the mobile electronic device is configured to electronically couple to the vehicle network upon detecting the signal; wherein the vehicle network is configured to provide a time limited security token to the mobile electronic device after the mobile electronic device is electronically coupled to the vehicle network; wherein the security token is inactivated after a predetermined time period; and wherein the vehicle network is configured to restrict access to the network to only mobile electronic devices having the provided security token within the predetermined time period. 17. The system of claim 16, further comprising a proximity sensor located in the vicinity of the occupant; and wherein the connection between the mobile electronic device and the vehicle network is discontinued if the proximity sensor fails to detect the presence of the occupant for a specified time period. 18. The system of claim 17, wherein the proximity sensor is located in one of a floor of the vehicle, a steering wheel, a vehicle seat, or a vehicle seat belt. 19. The system of claim 16, wherein the vehicle network is configured to restrict access to the network unless the mobile electronic device provides an indication that the mobile electronic device is connected to the body area network.
2,600
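The record above describes a time-limited security token: the vehicle network issues a token when a device couples through the body area network, inactivates it after a predetermined period, and then refuses access. A minimal sketch of that token lifecycle, assuming hypothetical names (`VehicleNetwork`, `issue_token`, `has_access`) and an explicit clock parameter for illustration, not the patent's implementation:

```python
class VehicleNetwork:
    """Sketch of a time-limited token scheme: tokens are issued on
    coupling and are inactivated after `token_lifetime_s` seconds."""

    def __init__(self, token_lifetime_s):
        self.token_lifetime_s = token_lifetime_s
        self._issued = {}  # token -> issue time in seconds

    def issue_token(self, device_id, now):
        # Token issued once the device is electronically coupled.
        token = "tok-{}-{}".format(device_id, int(now))
        self._issued[token] = now
        return token

    def has_access(self, token, now):
        issued = self._issued.get(token)
        if issued is None:
            return False  # unknown tokens never grant access
        # Token is inactivated after the predetermined time period.
        return (now - issued) <= self.token_lifetime_s
```

Passing `now` explicitly keeps the expiry check deterministic; a real network would read a clock and likely bind the token to the body-area-network coupling state as well.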
10,257
10,257
16,265,608
2,684
A multi-tenant RFID system that may be Cloud based or run on a local area network (LAN) for distributed RFID devices and RFID applications. The RFID system provides a central abstraction and translation layer between RFID devices installed in geographically diverse locations and applications. RFID devices initiate communication to a Cloud or LAN network to send events and receive commands. RFID applications can receive RFID tag data, device health, and requested and derived events from the RFID system to automatically run processes based on the provided data. Applications manage RFID devices and settings in the RFID system using command and configuration interfaces.
1-22. (canceled) 23. A method for operating a radio frequency identification (RFID) system including an RFID tag configured to store electronic product code (EPC) data, an RFID device configured to write data to the RFID tag associated with an item, a controlling computer, a database, a plurality of cloud applications, and a serialization service configured to run on a cloud network, the method comprising: requesting, by the controlling computer from the serialization service, first data comprising a unique identifier for the RFID tag, wherein the unique identifier is unique across the plurality of cloud applications; receiving, by the controlling computer from the serialization service, the first data; controlling, by the controlling computer, the RFID device to write the first data to the RFID tag; writing, using the RFID device, the first data to the RFID tag; and storing, in the database, an association between the first data of the RFID tag and a second data of the item. 24. The method of claim 23, wherein the cloud network includes a server on which runs a device application program interface (API), and wherein the device API is configured to receive the first data from the RFID tag and to manage the database. 25. The method of claim 24, wherein the device API is configured to store the first data and the second data in the database. 26. The method of claim 25, wherein the RFID system is run on a local area network (LAN) such that communications during an operation of the RFID system are prevented from passing outside an organization of an end user. 27. The method of claim 26, wherein the cloud server and the runtime system are operated on a premise of the organization of the end user. 28. The method of claim 27, wherein the RFID system is not run across the Internet. 29. The method of claim 25, wherein the server is configured to provide a cloud application API for use with an endpoint application. 30. 
The method of claim 23, wherein the first data and the second data correspond to a manufacturer of the item associated with the RFID tag. 31. The method of claim 25, wherein the API is configured to provide information based on a request from an endpoint application that includes the first data or the second data. 32. The method of claim 31, wherein the provided information corresponding to the first data or the second data is stored in the database. 33. A radio frequency identification (RFID) system comprising: a serialization service operating on a cloud network, wherein the serialization service is configured to generate unique electronic product code (EPC) information; a controlling computer operatively coupled to an RFID device and the cloud network, wherein the RFID device is not cloud-enabled, and wherein the controlling computer is configured to: communicate with the cloud network; communicate with the RFID device; receive a request from the cloud network to program the unique EPC information with the RFID device; and interact with the RFID device based on the request so as to cause the RFID device to program an RFID tag with the unique EPC information. 34. The RFID system of claim 33, wherein the cloud network is configured to store and track RFID tag information and associated item information. 35. The RFID system of claim 33, wherein the controlling computer is configured to send status information about the RFID device to the cloud network, and wherein the status information is not in response to the request from the cloud network. 36. The RFID system of claim 33, wherein the controlling computer is configured to push data from the RFID device to the cloud network. 37. The RFID system of claim 36, wherein the controlling computer is configured to push data from the RFID device to the cloud network without receiving a request from the cloud network. 38. 
The RFID system of claim 36, wherein the controlling computer is configured to push data from the RFID device to the cloud network without receiving a request from an endpoint application. 39. The RFID system of claim 36, wherein the controlling computer is configured to generate a message indicating a health of the RFID device. 40. The RFID system of claim 33, wherein the controlling computer is configured to control when an RFID scan occurs. 41. The RFID system of claim 33, wherein the controlling computer is configured to control what RFID scan settings are used for an RFID scan. 42. The RFID system of claim 33, wherein the controlling computer performs security authentication with the cloud network.
A multi-tenant RFID system that may be Cloud based or run on a local area network (LAN) for distributed RFID devices and RFID applications. The RFID system provides a central abstraction and translation layer between RFID devices installed in geographically diverse locations and applications. RFID devices initiate communication to a Cloud or LAN network to send events and receive commands. RFID applications can receive RFID tag data, device health, and requested and derived events from the RFID system to automatically run processes based on the provided data. Applications manage RFID devices and settings in the RFID system using command and configuration interfaces. 1-22. (canceled) 23. A method for operating a radio frequency identification (RFID) system including an RFID tag configured to store electronic product code (EPC) data, an RFID device configured to write data to the RFID tag associated with an item, a controlling computer, a database, a plurality of cloud applications, and a serialization service configured to run on a cloud network, the method comprising: requesting, by the controlling computer from the serialization service, first data comprising a unique identifier for the RFID tag, wherein the unique identifier is unique across the plurality of cloud applications; receiving, by the controlling computer from the serialization service, the first data; controlling, by the controlling computer, the RFID device to write the first data to the RFID tag; writing, using the RFID device, the first data to the RFID tag; and storing, in the database, an association between the first data of the RFID tag and a second data of the item. 24. The method of claim 23, wherein the cloud network includes a server on which runs a device application program interface (API), and wherein the device API is configured to receive the first data from the RFID tag and to manage the database. 25. 
The method of claim 24, wherein the device API is configured to store the first data and the second data in the database. 26. The method of claim 25, wherein the RFID system is run on a local area network (LAN) such that communications during an operation of the RFID system are prevented from passing outside an organization of an end user. 27. The method of claim 26, wherein the cloud server and the runtime system are operated on a premise of the organization of the end user. 28. The method of claim 27, wherein the RFID system is not run across the Internet. 29. The method of claim 25, wherein the server is configured to provide a cloud application API for use with an endpoint application. 30. The method of claim 23, wherein the first data and the second data correspond to a manufacturer of the item associated with the RFID tag. 31. The method of claim 25, wherein the API is configured to provide information based on a request from an endpoint application that includes the first data or the second data. 32. The method of claim 31, wherein the provided information corresponding to the first data or the second data is stored in the database. 33. A radio frequency identification (RFID) system comprising: a serialization service operating on a cloud network, wherein the serialization service is configured to generate unique electronic product code (EPC) information; a controlling computer operatively coupled to an RFID device and the cloud network, wherein the RFID device is not cloud-enabled, and wherein the controlling computer is configured to: communicate with the cloud network; communicate with the RFID device; receive a request from the cloud network to program the unique EPC information with the RFID device; and interact with the RFID device based on the request so as to cause the RFID device to program an RFID tag with the unique EPC information. 34. 
The RFID system of claim 33, wherein the cloud network is configured to store and track RFID tag information and associated item information. 35. The RFID system of claim 33, wherein the controlling computer is configured to send status information about the RFID device to the cloud network, and wherein the status information is not in response to the request from the cloud network. 36. The RFID system of claim 33, wherein the controlling computer is configured to push data from the RFID device to the cloud network. 37. The RFID system of claim 36, wherein the controlling computer is configured to push data from the RFID device to the cloud network without receiving a request from the cloud network. 38. The RFID system of claim 36, wherein the controlling computer is configured to push data from the RFID device to the cloud network without receiving a request from an endpoint application. 39. The RFID system of claim 36, wherein the controlling computer is configured to generate a message indicating a health of the RFID device. 40. The RFID system of claim 33, wherein the controlling computer is configured to control when an RFID scan occurs. 41. The RFID system of claim 33, wherein the controlling computer is configured to control what RFID scan settings are used for an RFID scan. 42. The RFID system of claim 33, wherein the controlling computer performs security authentication with the cloud network.
2,600
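The RFID record above describes a serialization service that hands out identifiers unique across all cloud applications, plus a controlling computer that writes the identifier to a tag and stores the tag/item association in a database. A minimal sketch of that commissioning flow, with hypothetical names (`SerializationService`, `commission_tag`) and a simple counter-based EPC format assumed purely for illustration:

```python
import itertools

class SerializationService:
    """Issues identifiers that stay unique across all tenants because
    one central service hands them out (sketch, not the real service)."""

    def __init__(self):
        self._counter = itertools.count(1)

    def next_epc(self):
        return "EPC-{:08d}".format(next(self._counter))

def commission_tag(service, database, item_id):
    # Request first data (the unique identifier) from the service.
    epc = service.next_epc()
    # A real controlling computer would now drive the RFID device to
    # write `epc` to the physical tag; that step is elided here.
    database[epc] = item_id  # store the tag/item association
    return epc
```

Centralizing identifier issuance is what makes the uniqueness guarantee cheap; the database here is just a dict standing in for the system's tag/item store.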
10,258
10,258
15,945,286
2,643
Systems and methods for passive radio enabled power gating for a body mountable device are disclosed. In one embodiment, a system for power gating includes: a power supply; a Near Field Communication (NFC) antenna to receive NFC signals; a Radio Frequency (RF) rectifier electrically coupled to the NFC antenna to generate a current based on a signal received from the NFC antenna; and an electronic switch coupled between the power supply and a sensor, wherein the RF rectifier is further coupled to the switch to apply the current to the switch to change a state of the switch.
1. A wearable glucose sensor device comprising: a power supply; a first antenna; a radio frequency (RF) rectifier electrically coupled to the first antenna to generate a current based on a first signal received from the first antenna; a sensor terminal adapted to electrically couple to a sensor; a memory configured to store data received via the sensor terminal; an electronic switch coupled between the power supply and the sensor to control electrical connection between the power supply and the sensor, wherein the RF rectifier is further coupled to the electronic switch to apply the current to the electronic switch to change a state of the electronic switch; and a network interface coupled to the memory, the network interface activated by a second signal to transmit data stored in the memory to a remote device. 2. The wearable glucose sensor device of claim 1, wherein the electronic switch comprises one or more of: a transistor, a relay, or an electronically controlled switch. 3. The wearable glucose sensor device of claim 1, wherein the power supply comprises a battery. 4. The wearable glucose sensor device of claim 1, further comprising a second antenna; and wherein the network interface is configured to receive the second signal from the second antenna. 5. The wearable glucose sensor device of claim 1, wherein the electronic switch is further coupled between the power supply and a processor; and wherein the processor is coupled to the sensor terminal to receive sensor data from the sensor when the electronic switch is closed, and to the memory to store the sensor data in the memory. 6. The wearable glucose sensor device of claim 5, wherein the processor receives a second current from the power supply when the electronic switch is closed. 7. The wearable glucose sensor device of claim 1, wherein the network interface is coupled to the first antenna to transmit the data stored in the memory. 8. 
The wearable glucose sensor device of claim 1, further comprising a second antenna and wherein the network interface is coupled to the second antenna to transmit the data stored in the memory using the second antenna, and wherein the second antenna is configured to transmit data using a different protocol than the first antenna. 9. The wearable glucose sensor device of claim 1, wherein the network interface is configured to receive the second signal via the first antenna. 10. A system for power gating for a wearable analyte sensor device comprising: a first antenna; a radio frequency (RF) rectifier electrically coupled to the first antenna to generate a current based on a first signal received from the first antenna; an analyte sensor; a memory configured to store data received from the analyte sensor; an electronic switch configured to control power to the analyte sensor, wherein the RF rectifier is coupled to the electronic switch and configured to apply the current to change a state of the electronic switch; and an NFC-based network interface coupled to the memory, the NFC-based network interface activated by a second signal to transmit data stored in the memory to a remote device. 11. The system of claim 10, wherein the electronic switch comprises one or more of: a transistor, a relay, or an electronically controlled switch. 12. The system of claim 10, further comprising a second antenna; and wherein the NFC-based network interface is configured to receive the second signal from the second antenna. 13. The system of claim 12, wherein the wearable analyte sensor device is configured to transmit data using the first antenna. 14. The system of claim 13, further comprising a second antenna and wherein the wearable analyte sensor device is configured to transmit data using the second antenna, and wherein the second antenna is configured to transmit data using a different protocol than the first antenna. 15. 
A wearable sensor device comprising: a first antenna; a radio frequency (RF) rectifier electrically coupled to the first antenna to generate a current based on a first signal received from the first antenna; a blood analyte sensor; a memory coupled to the blood analyte sensor; a network interface coupled to the memory; a processor coupled to the blood analyte sensor, the processor to: store sensor data received from the blood analyte sensor in the memory; upon receipt of a second signal via the network interface, transmit the stored sensor data to a remote device using the network interface; and an electronic switch configured to control power to the blood analyte sensor, wherein the RF rectifier is coupled to the electronic switch and configured to apply the current to change a state of the electronic switch. 16. The wearable sensor device of claim 15, wherein the electronic switch comprises one or more of: a transistor, a relay, or an electronically controlled switch. 17. The wearable sensor device of claim 15, wherein the network interface is configured to receive the second signal from the first antenna. 18. The wearable sensor device of claim 15, wherein the processor is configured to receive sensor data from the blood analyte sensor when the electronic switch is closed, and configured to store the sensor data in a memory. 19. The wearable sensor device of claim 15, wherein the processor is coupled to the first antenna to transmit data using the first antenna. 20. The wearable sensor device of claim 15, further comprising a second antenna and wherein the processor is coupled to the second antenna to receive the second signal and to transmit data using the second antenna, and wherein the second antenna is configured to transmit data using a different protocol than the first antenna.
Systems and methods for passive radio enabled power gating for a body mountable device are disclosed. In one embodiment, a system for power gating includes: a power supply; a Near Field Communication (NFC) antenna to receive NFC signals; a Radio Frequency (RF) rectifier electrically coupled to the NFC antenna to generate a current based on a signal received from the NFC antenna; and an electronic switch coupled between the power supply and a sensor, wherein the RF rectifier is further coupled to the switch to apply the current to the switch to change a state of the switch. 1. A wearable glucose sensor device comprising: a power supply; a first antenna; a radio frequency (RF) rectifier electrically coupled to the first antenna to generate a current based on a first signal received from the first antenna; a sensor terminal adapted to electrically couple to a sensor; a memory configured to store data received via the sensor terminal; an electronic switch coupled between the power supply and the sensor to control electrical connection between the power supply and the sensor, wherein the RF rectifier is further coupled to the electronic switch to apply the current to the electronic switch to change a state of the electronic switch; and a network interface coupled to the memory, the network interface activated by a second signal to transmit data stored in the memory to a remote device. 2. The wearable glucose sensor device of claim 1, wherein the electronic switch comprises one or more of: a transistor, a relay, or an electronically controlled switch. 3. The wearable glucose sensor device of claim 1, wherein the power supply comprises a battery. 4. The wearable glucose sensor device of claim 1, further comprising a second antenna; and wherein the network interface is configured to receive the second signal from the second antenna. 5. 
The wearable glucose sensor device of claim 1, wherein the electronic switch is further coupled between the power supply and a processor; and wherein the processor is coupled to the sensor terminal to receive sensor data from the sensor when the electronic switch is closed, and to the memory to store the sensor data in the memory. 6. The wearable glucose sensor device of claim 5, wherein the processor receives a second current from the power supply when the electronic switch is closed. 7. The wearable glucose sensor device of claim 1, wherein the network interface is coupled to the first antenna to transmit the data stored in the memory. 8. The wearable glucose sensor device of claim 1, further comprising a second antenna and wherein the network interface is coupled to the second antenna to transmit the data stored in the memory using the second antenna, and wherein the second antenna is configured to transmit data using a different protocol than the first antenna. 9. The wearable glucose sensor device of claim 1, wherein the network interface is configured to receive the second signal via the first antenna. 10. A system for power gating for a wearable analyte sensor device comprising: a first antenna; a radio frequency (RF) rectifier electrically coupled to the first antenna to generate a current based on a first signal received from the first antenna; an analyte sensor; a memory configured to store data received from the analyte sensor; an electronic switch configured to control power to the analyte sensor, wherein the RF rectifier is coupled to the electronic switch and configured to apply the current to change a state of the electronic switch; and an NFC-based network interface coupled to the memory, the NFC-based network interface activated by a second signal to transmit data stored in the memory to a remote device. 11. The system of claim 10, wherein the electronic switch comprises one or more of: a transistor, a relay, or an electronically controlled switch. 
12. The system of claim 10, further comprising a second antenna; and wherein the NFC-based network interface is configured to receive the second signal from the second antenna. 13. The system of claim 12, wherein the wearable analyte sensor device is configured to transmit data using the first antenna. 14. The system of claim 13, further comprising a second antenna and wherein the wearable analyte sensor device is configured to transmit data using the second antenna, and wherein the second antenna is configured to transmit data using a different protocol than the first antenna. 15. A wearable sensor device comprising: a first antenna; a radio frequency (RF) rectifier electrically coupled to the first antenna to generate a current based on a first signal received from the first antenna; a blood analyte sensor; a memory coupled to the blood analyte sensor; a network interface coupled to the memory; a processor coupled to the blood analyte sensor, the processor to: store sensor data received from the blood analyte sensor in the memory; upon receipt of a second signal via the network interface, transmit the stored sensor data to a remote device using the network interface; and an electronic switch configured to control power to the blood analyte sensor, wherein the RF rectifier is coupled to the electronic switch and configured to apply the current to change a state of the electronic switch. 16. The wearable sensor device of claim 15, wherein the electronic switch comprises one or more of: a transistor, a relay, or an electronically controlled switch. 17. The wearable sensor device of claim 15, wherein the network interface is configured to receive the second signal from the first antenna. 18. The wearable sensor device of claim 15, wherein the processor is configured to receive sensor data from the blood analyte sensor when the electronic switch is closed, and configured to store the sensor data in a memory. 19. 
The wearable sensor device of claim 15, wherein the processor is coupled to the first antenna to transmit data using the first antenna. 20. The wearable sensor device of claim 15, further comprising a second antenna and wherein the processor is coupled to the second antenna to receive the second signal and to transmit data using the second antenna, and wherein the second antenna is configured to transmit data using a different protocol than the first antenna.
2,600
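The power-gating record above turns on a narrow behaviour: rectified current from the NFC field changes the state of a switch between the power supply and the sensor, so the sensor draws no power until a reader's field is present. A toy state-machine sketch of that gating logic; the class name, method names, and the 5 microamp threshold are all assumptions for illustration, not values from the patent:

```python
class PowerGate:
    """Sketch of rectifier-driven power gating: rectified current at or
    above a threshold closes the switch feeding the sensor."""

    def __init__(self, threshold_ua=5.0):
        self.threshold_ua = threshold_ua
        self.closed = False  # switch open -> sensor unpowered

    def apply_rectifier_current(self, current_ua):
        # The RF rectifier's output current changes the switch state.
        self.closed = current_ua >= self.threshold_ua

    def sensor_powered(self):
        return self.closed
```

Modelling the switch as a threshold comparison captures the essential point: the battery is never drained by the sensor while no field is applied.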
10,259
10,259
14,923,201
2,667
One embodiment provides a method for capturing information from a measuring instrument, including: capturing, using an image capture device, an image of information displayed on the measuring instrument; analyzing, using a processor, the information; detecting, based on the analyzing, a plurality of elements within the information; wherein the plurality of elements comprise: identification data associated with the measuring instrument and measurement data, the identification data being coded within a tag element; wherein the tag element is at least one of a quick response code, a two dimensional barcode, a barcode, and a service identification tag; the identification data comprising data relating to at least one of: unit type, notation type, device type, device identification, device location, device vendor, device manufacturer, and current user; extracting, using a processor, the plurality of elements from the image; and storing, in a storage device, the plurality of elements in a formatted file. Other aspects are described and claimed.
1. A method for capturing information from a measuring instrument, comprising: capturing, using an image capture device, an image of information displayed on the measuring instrument; analyzing, using a processor, the information; detecting, based on the analyzing, a plurality of elements within the information; wherein the plurality of elements comprise: identification data associated with the measuring instrument and measurement data, the identification data being coded within a tag element; wherein the tag element is at least one of a quick response code, a two dimensional barcode, a barcode, and a service identification tag; the identification data comprising data relating to at least one of: unit type, notation type, device type, device identification, device location, device vendor, device manufacturer, and current user; extracting, using a processor, the plurality of elements from the image; and storing, in a storage device, the plurality of elements in a formatted file. 2. The method of claim 1, wherein the measurement data is coded within the tag element. 3. The method of claim 2, wherein the measurement data coded within the tag is updated based on at least one of: user input and a predetermined interval. 4. The method of claim 1, wherein the measurement data comprises at least one character, wherein the at least one character comprises at least one of: a letter, a number, and a symbol. 5. The method of claim 4, further comprising generating, using optical character recognition, machine text based on the at least one character. 6. The method of claim 1, wherein the storage device comprises a remote storage device. 7. The method of claim 1, further comprising displaying, on a display device, the identification data and the measurement data. 8. The method of claim 1, further comprising responsive to a user input, transferring, using a network connection device, the identification data and the measurement data to a remote storage device. 9. 
The method of claim 1, further comprising: detecting, using a processor, a location of the image capture device. 10. The method of claim 9, further comprising: transferring, using a network connection device, location data associated with the location of the image capture device. 11. An information handling device for capturing information from a measuring instrument, comprising: a processor; an image capture device; a memory device that stores instructions executable by the processor to: capture, using the image capture device, an image of information displayed on the measuring instrument; analyze, using the processor, the information; detect, based on the analyzing, a plurality of elements within the information; wherein the plurality of elements comprise: identification data associated with the measuring instrument and measurement data, the identification data being coded within a tag element; wherein the tag element is at least one of a quick response code, a two dimensional barcode, a barcode, and a service identification tag; the identification data comprising data relating to at least one of: unit type, notation type, device type, device identification, device location, device vendor, device manufacturer, and current user; extract, using the processor, the plurality of elements from the image; and store, in a storage device, the plurality of elements in a formatted file. 12. The information handling device of claim 11, wherein the measurement data is coded within the tag element. 13. The information handling device of claim 12, wherein the measurement data coded within the tag is updated based on at least one of: user input and a predetermined interval. 14. The information handling device of claim 11, wherein the measurement data comprises at least one character, wherein the at least one character comprises at least one of: a letter, a number, and a symbol. 15. 
The information handling device of claim 14, wherein the instructions are further executable by the processor to generate, using optical character recognition, machine text based on the at least one character. 16. The information handling device of claim 11, wherein the storage device comprises a remote storage device. 17. The information handling device of claim 11, wherein the instructions are further executable by the processor to: display, on a display device, the identification data and the measurement data. 18. The information handling device of claim 11, wherein the instructions are further executable by the processor to: responsive to a user input, transfer, using a network connection device, the identification data and the measurement data to a remote storage device. 19. The information handling device of claim 11, further comprising a network connection device, wherein the instructions are further executable by the processor to: transfer, using the network connection device, location data associated with the location of the image capture device. 20. 
A product for capturing analysis information from a measuring instrument, comprising: a storage device having code stored therewith, the code being executable by a processor and comprising: code that captures, using an image capture device, an image of information displayed on the measuring instrument; code that analyzes the information; code that detects, based on the analyzing, a plurality of elements within the information; wherein the plurality of elements comprise: identification data associated with the measuring instrument and measurement data, the identification data being coded within a tag element; wherein the tag element is at least one of a quick response code, a two dimensional barcode, a barcode, and a service identification tag; the identification data comprising data relating to at least one of: unit type, notation type, device type, device identification, device location, device vendor, device manufacturer, and current user; code that extracts the plurality of elements from the image; and code that stores, in a storage device, the plurality of elements in a formatted file.
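The capture method in claim 1 (decode identification data from a tag element, pair it with the OCR'd measurement data, and store the elements in a formatted file) can be sketched as follows. This is a minimal illustration, not the patented implementation: the "key=value;" payload encoding, field names, and JSON output format are all assumptions, and a real system would obtain the tag payload from a QR/barcode decoder and the measurement text from an OCR engine.

```python
import json

# Hypothetical payload format: the claims only require that identification
# data (device type, location, vendor, etc.) be coded within a tag element
# such as a QR code; a simple "key=value;..." encoding is assumed here.
def parse_tag_payload(payload: str) -> dict:
    """Split a decoded tag payload into identification-data fields."""
    fields = {}
    for pair in payload.split(";"):
        if "=" in pair:
            key, value = pair.split("=", 1)
            fields[key.strip()] = value.strip()
    return fields

def store_reading(tag_payload: str, measurement_text: str, path: str) -> dict:
    """Combine identification data and measurement data into the plurality
    of elements, then store them in a formatted (here, JSON) file."""
    record = {
        "identification": parse_tag_payload(tag_payload),
        "measurement": measurement_text,  # e.g. OCR output of the display
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
    return record

record = store_reading(
    "device_type=multimeter;device_location=lab-3;device_vendor=AcmeCo",
    "12.4 V",
    "reading.json",
)
```

The formatted file keeps the two element types separate, so downstream consumers (claim 7's display step, claim 8's transfer step) can use either independently.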
One embodiment provides a method for capturing information from a measuring instrument, including: capturing, using an image capture device, an image of information displayed on the measuring instrument; analyzing, using a processor, the information; detecting, based on the analyzing, a plurality of elements within the information; wherein the plurality of elements comprise: identification data associated with the measuring instrument and measurement data, the identification data being coded within a tag element; wherein the tag element is at least one of a quick response code, a two dimensional barcode, a barcode, and a service identification tag; the identification data comprising data relating to at least one of: unit type, notation type, device type, device identification, device location, device vendor, device manufacturer, and current user; extracting, using a processor, the plurality of elements from the image; and storing, in a storage device, the plurality of elements in a formatted file. Other aspects are described and claimed.
2,600
10,260
10,260
15,624,613
2,665
Embodiments of the present invention include a system and method for wirelessly identifying and validating an electronic device in order to initiate a communication process with another device or a service. In an embodiment, the system includes a portable biometric monitoring device that is identified by a client device or a server for the purpose of initiating a pairing process. In an embodiment, pairing implies pairing the portable device to an online user account with minimal user interaction. After pairing, the portable device and appropriate client devices and servers communicate with little or no user interaction, for example to upload sensor data collected by the portable device.
1. A portable monitoring system, comprising: a first device configurable to communicate wirelessly with a second device, wherein communication comprises, the first device discovering the second device by wireless communication; the first device and the second device communicating to start a pairing attempt; the first device and the second device exchanging data to complete pairing, wherein at least one of the first device and the second device comprises a portable monitoring device comprising, wireless transmitter circuitry; wireless receiver circuitry; a plurality of sensors configurable to sense a plurality of physical phenomena; user interface means for communicating to a user, wherein the user interface means comprises one or more of a screen, a touch screen, a vibramotor, a keyboard, light emitting diodes (LEDs), and buttons, wherein communicating with the user comprises cueing the user to validate a request to pair, and wherein a user response to the cue comprises the user tapping the first device anywhere on its surface; and processing circuitry configurable to interpret data from the plurality of sensors. 2. The system of claim 1, wherein the plurality of sensors comprises a motion sensor comprising at least one of an accelerometer, a gyroscope sensor, a microphone, an altimeter, and a magnetometer, and wherein the user response to the cue is detectable by the motion sensor. 3. The system of claim 1, further comprising a client device configurable to communicate with the portable monitoring device on behalf of a server, and to communicate data from the portable monitoring device to the server. 4. The system of claim 3, wherein the client device comprises a client software application. 5. The system of claim 3, wherein the server device comprises a mobile phone. 6. 
The system of claim 1, wherein pairing comprises pairing the portable monitoring device with a web based user account accessible through a web site as part of a setup process, wherein user data, including data entered by the user using the web site, is downloaded to the portable monitoring device. 7. The system of claim 2, wherein the processing circuitry is further configurable to process data from the plurality of sensors to generate a plurality of biometric data based on user physical activity, and wherein after pairing, the biometric data is automatically uploaded to the server when the device is in wireless communication range with one or more of the server and a client acting on behalf of the server. 8. The system of claim 1, wherein the plurality of sensors comprises at least one of an audio sensor, an accelerometer, altimeter, photoplethysmograph, magnetometer, Global Positioning System sensor, thermometer, and a gyroscope. 9. The system of claim 8, wherein the processor operates on data from the plurality of sensors to generate one or more of number of steps taken, amount of elevation gained, and distance traversed. 10. The system of claim 8, wherein cueing comprises causing the device to perform at least one of vibrating, illuminating, making a sound, and displaying a message. 11. 
A portable monitoring device, comprising: a user interface comprising at least one of, a speaker; a vibramotor; motion sensors; gesture recognition sensors; a touch screen; a microphone; and buttons; transmitter circuitry and receiver circuitry; a plurality of sensors configurable to sense physical phenomena; processing circuitry coupled to the plurality of sensors to receive sensor data and calculate metrics associated with a user, the processing circuitry configurable to communicate with another device and with a user, wherein communicating comprises, requesting to pair with the other device; cueing the user to validate the request to pair; and receiving input from the user to validate the pairing request, wherein input comprises tapping the device on any part of its exterior, wherein the tapping is detected by a motion sensor. 12. The portable monitoring device of claim 11, wherein the metrics comprise at least one of: sleep activity; step count; calorie burn; distance traversed; speed; and heart rate. 13. The device of claim 11, wherein communicating further comprises responding to the user input by completing the pairing process. 14. The device of claim 11, wherein communicating with the client device further comprises, in response to the client device discovering more than one portable monitoring device in proximity, the client device requesting the user to move one of the portable monitoring devices closer to the client for pairing.
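The tap-validated pairing flow of claims 1 and 11 (cue the user, then complete pairing only if the motion sensor detects a tap anywhere on the device surface) can be sketched as below. This is a minimal illustration under stated assumptions: the sample stream, the 2.0 g threshold, and the function names are hypothetical, and real firmware would read a hardware accelerometer rather than a Python list.

```python
TAP_THRESHOLD_G = 2.0  # assumed magnitude (in g) that distinguishes a tap from baseline motion

def cue_user() -> None:
    """Stand-in for cueing: vibrating, illuminating, making a sound, or displaying a message."""
    print("Tap the device anywhere to confirm pairing")

def detect_tap(samples: list[float], threshold: float = TAP_THRESHOLD_G) -> bool:
    """Return True if any accelerometer magnitude sample exceeds the threshold."""
    return any(abs(s) > threshold for s in samples)

def pairing_attempt(samples: list[float]) -> str:
    """Cue the user, then validate the pairing request via the motion sensor."""
    cue_user()
    if detect_tap(samples):
        return "paired"    # the devices would now exchange data to complete pairing
    return "rejected"      # no tap detected within the validation window

# A tap appears as a brief spike above the roughly 1 g gravity baseline:
result = pairing_attempt([1.0, 1.1, 3.2, 1.0])
```

Validating with a tap rather than a typed code is what lets a screenless tracker confirm pairing with essentially no user interface, which is the point of the claimed design.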
2,600
10,261
10,261
15,695,856
2,613
A graphics adjustment system detects the video resolution of digital video to be output by a receiving device and saves the graphics settings input by the user when the user adjusts the graphics settings on the receiving device such that the digital video being presented on the presentation device is not cut off due to overscanning. The system saves the graphics adjustment settings as the settings to use going forward for digital video of that same resolution on that particular presentation device. In this manner, the digital video output from the receiving device will not be cut off when presented on the presentation device, even when the receiving device is switching between receiving digital video programming of different resolutions from various program distributors and/or content providers.
1. A method for video graphics adjustment comprising: detecting, by at least one processor of a receiving device, a resolution of digital video content to be output from the receiving device to a display; in response to the detecting the resolution of the digital video content to be output from the receiving device to the display, retrieving, by at least one processor of the receiving device, based on the detected resolution of the digital video content, a previously stored video graphics adjustment setting having a previously stored association with the detected resolution of the digital video content, the previously stored association being that application of the video graphics adjustment setting to digital video content having the detected resolution avoids edges of digital video frames of digital video content having the detected resolution being cut off when presented on the display; and in response to the at least one processor of the receiving device retrieving the previously stored video graphics adjustment setting having the previously stored association with the detected resolution of the digital video content to be output from the receiving device, applying, by at least one processor of the receiving device, the previously stored video graphics adjustment setting to the digital video content to be output from the receiving device before the digital video content to be output from the receiving device is presented on the display. 2. 
The method of claim 1 further comprising: before the detecting the resolution of the digital video content to be output from the receiving device to a display, outputting, by at least one processor of the receiving device, from the receiving device for presentation on the display, first different digital video content having the resolution, wherein edges of digital video frames of the first different digital video content having the resolution are cut off when presented on the display; receiving, by at least one processor of the receiving device, input from a user indicating the video graphics adjustment setting; applying, by at least one processor of the receiving device, the video graphics adjustment setting to the first different digital video content having the resolution, wherein the video graphics adjustment setting being applied to the first different digital video content having the resolution avoids the digital video frames of the first different digital video content having the resolution being cut off when presented on the display; associating, by the at least one processor of the receiving device, the video graphics adjustment setting with the resolution of the first different digital video content that was output to the display; and storing, by the at least one processor of the receiving device, for future retrieval by the receiving device, the video graphics adjustment setting and the association of the video graphics adjustment setting with the resolution of the first different digital video content that was output to the display. 3. 
The method of claim 2 further comprising: before the detecting the resolution of the digital video content to be output from the receiving device to a display, outputting, by at least one processor of the receiving device, from the receiving device for presentation on the display, second different digital video content having a different resolution, wherein edges of digital video frames of the second different digital video content having the different resolution are cut off when presented on the display; receiving, by at least one processor of the receiving device, input from a user indicating a different video graphics adjustment setting; applying, by at least one processor of the receiving device, the different video graphics adjustment setting to the second different digital video content having the different resolution, wherein the different video graphics adjustment setting being applied to the second different digital video content having the different resolution avoids the digital video frames of the second different digital video content having the different resolution being cut off when presented on the display; associating, by the at least one processor of the receiving device, the different video graphics adjustment setting with the different resolution of the second different digital video content that was output to the display; and storing, by the at least one processor of the receiving device, for future retrieval by the receiving device, the different video graphics adjustment setting and the association of the different video graphics adjustment setting with the different resolution of the second different digital video content that was output to the display. 4. The method of claim 3 wherein the resolution is a high definition (HD) digital video resolution and the different resolution is a video resolution higher than HD resolution. 5. 
The method of claim 4 wherein the HD digital video resolution is at least 1920 pixels×1080 lines and the different resolution is at least 3840 pixels×2160 lines. 6. The method of claim 1 wherein the video graphics adjustment setting includes a resizing of the digital video content having the detected resolution. 7. The method of claim 6 wherein the resizing of the digital video content having the detected resolution includes downscaling of digital video frames of the digital video content. 8. The method of claim 1 wherein the video graphics adjustment setting includes a repositioning of frames of the digital video content on the display. 9. A system for video graphics adjustment comprising: at least one processor; at least one memory in communication with the at least one processor, the at least one memory having computer-executable instructions stored thereon that, when executed, cause the at least one processor to: output for presentation on a display digital video content for which edges of digital video frames of the digital video content are not cut off when presented on the display; output, for presentation on the display, first different digital video content having a resolution, wherein edges of digital video frames of the first different digital video content having the resolution are cut off when presented on the display; receive input from a user indicating a video graphics adjustment setting; apply the video graphics adjustment setting to the first different digital video content having the resolution, wherein the video graphics adjustment setting being applied to the first different digital video content having the resolution avoids the digital video frames of the first different digital video content having the resolution being cut off when presented on the display; associate the video graphics adjustment setting with the resolution of the first different digital video content that was output to the display; and store, for future retrieval by a 
receiving device, the video graphics adjustment setting and the association of the video graphics adjustment setting with the resolution of the first different digital video content that was output to the display. 10. The system of claim 9, wherein the computer-executable instructions, when executed, further cause the at least one processor to: output second different digital video content having a different resolution, wherein edges of digital video frames of the second different digital video content having the different resolution are cut off when presented on the display; receive input from a user indicating a different video graphics adjustment setting; apply the different video graphics adjustment setting to the second different digital video content having the different resolution, wherein the different video graphics adjustment setting being applied to the second different digital video content having the different resolution avoids the digital video frames of the second different digital video content having the different resolution being cut off when presented on the display; associate the different video graphics adjustment setting with the different resolution of the second different digital video content that was output to the display; and store, for future retrieval by a receiving device, the different video graphics adjustment setting and the association of the different video graphics adjustment setting with the different resolution of the second different digital video content that was output to the display. 11. 
The system of claim 10, wherein the computer-executable instructions, when executed, further cause the at least one processor to: detect a resolution of third different digital video content to be output from the receiving device to the display; in response to the detecting the resolution of the third different digital video content to be output from the receiving device to the display, search for a previously stored video graphics adjustment setting associated with the detected resolution of the third different digital video content to be output from the receiving device to the display; retrieve the previously stored video graphics adjustment setting associated with the detected resolution of the third different digital video content to be output from the receiving device to the display, the association with the detected resolution of the third different digital video content being that application of the video graphics adjustment setting associated with the detected resolution of the third different digital video content to digital video content having the detected resolution of the third different digital video content avoids edges of digital video frames of digital video content having the detected resolution of the third different digital video content being cut of when presented on the display; and in response to the retrieving the previously stored video graphics adjustment setting associated with the detected resolution of the third different digital video content, apply the previously stored video graphics adjustment setting associated with the detected resolution of the third different digital video content to the third different digital video content to be output from the receiving device before the third different digital video content to be output from the receiving device is presented on the display. 12. 
The system of claim 11, wherein the detected resolution of the third different digital video content is different than the resolution of the first different digital video content and the previously stored video graphics adjustment setting associated with the detected resolution of the third different digital video content is different than the previously stored video graphics adjustment setting associated with the resolution of the first different digital video content that was output to the display. 13. The system of claim 10 wherein the resolution of the first different digital video content is a high definition (HD) digital video resolution and the different resolution of the second different digital video content is a video resolution higher than HD resolution. 14. The system of claim 13 wherein the HD digital video resolution is at least 1920 pixels×1080 lines and the different resolution is at least 3840 pixels×2160 lines. 15. The system of claim 9 wherein the video graphics adjustment setting includes a resizing of the digital video content having the detected resolution. 16. The system of claim 15 wherein the resizing of the digital video content having the detected resolution includes downscaling of digital video frames of the digital video content. 17. The system of claim 9 wherein the video graphics adjustment setting is includes a repositioning of frames of the digital video content on the display. 18. 
A non-transitory computer-readable medium having computer-executable instructions stored thereon that, when executed, cause at least one processor to: detect a plurality of different resolutions of each different digital video content of a plurality of different digital video content to be output from a receiving device to a display; and for each detected different resolution of each different digital video content: search for a corresponding previously stored video graphics adjustment setting associated with the detected different resolution, the association with the detected different resolution being that application of the corresponding previously stored video graphics adjustment setting to digital video content having the detected different resolution avoids edges of digital video frames of digital video content having the detected different resolution being cut off when presented on the display; retrieve the corresponding previously stored video graphics adjustment setting; and in response to the retrieving the corresponding previously stored video graphics adjustment setting, apply the corresponding previously stored video graphics adjustment setting to the different digital video content to be output from the receiving device. 19. The non-transitory computer-readable medium of claim 18 wherein the corresponding previously stored video graphics adjustment setting includes a resizing of the different digital video content to be output from the receiving device. 20. The non-transitory computer-readable medium of claim 19 wherein the resizing of the different digital video content to be output from the receiving device includes downscaling of digital video frames of the different digital video content.
A graphics adjustment system detects the video resolution of digital video to be output by a receiving device and saves the graphics settings input by the user when the user adjusts the graphics settings on the receiving device such that the digital video being presented on the presentation device is not cut off due to overscanning. The system saves the graphics adjustment settings as the setting to use going forward for digital video of that same resolution for that particular presentation device. In this manner, the digital video output from the receiving device will not be cut off when presented on the presentation device, even when the receiving device is switching between receiving digital video programming of different resolutions from various program distributors and/or the content providers.1. A method for video graphics adjustment comprising: detecting, by at least one processor of a receiving device, a resolution of digital video content to be output from the receiving device to a display; in response to the detecting the resolution of the digital video content to be output from the receiving device to the display, retrieving, by at least one processor of the receiving device, based on the detected resolution of the digital video content, a previously stored video graphics adjustment setting having a previously stored association with the detected resolution of the digital video content, the previously stored association being that application of the video graphics adjustment setting to digital video content having the detected resolution avoids edges of digital video frames of digital video content having the detected resolution being cut off when presented on the display; and in response to the at least one processor of the receiving device retrieving the previously stored video graphics adjustment setting having the previously stored association with the detected resolution of the digital video content to be output from the receiving device, applying, 
by at least one processor of the receiving device, the previously stored video graphics adjustment setting to the digital video content to be output from the receiving device before the digital video content to be output from the receiving device is presented on the display. 2. The method of claim 1 further comprising: before the detecting the resolution of the digital video content to be output from the receiving device to a display, outputting, by at least one processor of the receiving device, from the receiving device for presentation on the display, first different digital video content having the resolution, wherein edges of digital video frames of the first different digital video content having the resolution are cut off when presented on the display; receiving, by at least one processor of the receiving device, input from a user indicating the video graphics adjustment setting; applying, by at least one processor of the receiving device, the video graphics adjustment setting to the first different digital video content having the resolution, wherein the video graphics adjustment setting being applied to the first different digital video content having the resolution avoids the digital video frames of the first different digital video content having the resolution being cut off when presented on the display; associating, by the at least one processor of the receiving device, the video graphics adjustment setting with the resolution of the first different digital video content that was output to the display; and storing, by the at least one processor of the receiving device, for future retrieval by the receiving device, the video graphics adjustment setting and the association of the video graphics adjustment setting with the resolution of the first different digital video content that was output to the display. 3. 
The method of claim 2 further comprising: before the detecting the resolution of the digital video content to be output from the receiving device to a display, outputting, by at least one processor of the receiving device, from the receiving device for presentation on the display, second different digital video content having a different resolution, wherein edges of digital video frames of the second different digital video content having the different resolution are cut off when presented on the display; receiving, by at least one processor of the receiving device, input from a user indicating a different video graphics adjustment setting; applying, by at least one processor of the receiving device, the different video graphics adjustment setting to the second different digital video content having the different resolution, wherein the different video graphics adjustment setting being applied to the second different digital video content having the different resolution avoids the digital video frames of the second different digital video content having the different resolution being cut off when presented on the display; associating, by the at least one processor of the receiving device, the different video graphics adjustment setting with the different resolution of the second different digital video content that was output to the display; and storing, by the at least one processor of the receiving device, for future retrieval by the receiving device, the different video graphics adjustment setting and the association of the different video graphics adjustment setting with the different resolution of the second different digital video content that was output to the display. 4. The method of claim 3 wherein the resolution is a high definition (HD) digital video resolution and the different resolution is a video resolution higher than HD resolution. 5. 
The method of claim 4 wherein the HD digital video resolution is at least 1920 pixels×1080 lines and the different resolution is at least 3840 pixels×2160 lines. 6. The method of claim 1 wherein the video graphics adjustment setting includes a resizing of the digital video content having the detected resolution. 7. The method of claim 6 wherein the resizing of the digital video content having the detected resolution includes downscaling of digital video frames of the digital video content. 8. The method of claim 1 wherein the video graphics adjustment setting includes a repositioning of frames of the digital video content on the display. 9. A system for video graphics adjustment comprising: at least one processor; at least one memory in communication with the at least one processor, the at least one memory having computer-executable instructions stored thereon that, when executed, cause the at least one processor to: output for presentation on a display digital video content for which edges of digital video frames of the digital video content are not cut off when presented on the display; output, for presentation on the display, first different digital video content having a resolution, wherein edges of digital video frames of the first different digital video content having the resolution are cut off when presented on the display; receive input from a user indicating a video graphics adjustment setting; apply the video graphics adjustment setting to the first different digital video content having the resolution, wherein the video graphics adjustment setting being applied to the first different digital video content having the resolution avoids the digital video frames of the first different digital video content having the resolution being cut off when presented on the display; associate the video graphics adjustment setting with the resolution of the first different digital video content that was output to the display; and store, for future retrieval by a 
receiving device, the video graphics adjustment setting and the association of the video graphics adjustment setting with the resolution of the first different digital video content that was output to the display. 10. The system of claim 9, wherein the computer-executable instructions, when executed, further cause the at least one processor to: output second different digital video content having a different resolution, wherein edges of digital video frames of the second different digital video content having the different resolution are cut off when presented on the display; receive input from a user indicating a different video graphics adjustment setting; apply the different video graphics adjustment setting to the second different digital video content having the different resolution, wherein the different video graphics adjustment setting being applied to the second different digital video content having the different resolution avoids the digital video frames of the second different digital video content having the different resolution being cut off when presented on the display; associate the different video graphics adjustment setting with the different resolution of the second different digital video content that was output to the display; and store, for future retrieval by a receiving device, the different video graphics adjustment setting and the association of the different video graphics adjustment setting with the different resolution of the second different digital video content that was output to the display. 11. 
The system of claim 10, wherein the computer-executable instructions, when executed, further cause the at least one processor to: detect a resolution of third different digital video content to be output from the receiving device to the display; in response to the detecting the resolution of the third different digital video content to be output from the receiving device to the display, search for a previously stored video graphics adjustment setting associated with the detected resolution of the third different digital video content to be output from the receiving device to the display; retrieve the previously stored video graphics adjustment setting associated with the detected resolution of the third different digital video content to be output from the receiving device to the display, the association with the detected resolution of the third different digital video content being that application of the video graphics adjustment setting associated with the detected resolution of the third different digital video content to digital video content having the detected resolution of the third different digital video content avoids edges of digital video frames of digital video content having the detected resolution of the third different digital video content being cut off when presented on the display; and in response to the retrieving the previously stored video graphics adjustment setting associated with the detected resolution of the third different digital video content, apply the previously stored video graphics adjustment setting associated with the detected resolution of the third different digital video content to the third different digital video content to be output from the receiving device before the third different digital video content to be output from the receiving device is presented on the display. 12. 
The system of claim 11, wherein the detected resolution of the third different digital video content is different than the resolution of the first different digital video content and the previously stored video graphics adjustment setting associated with the detected resolution of the third different digital video content is different than the previously stored video graphics adjustment setting associated with the resolution of the first different digital video content that was output to the display. 13. The system of claim 10 wherein the resolution of the first different digital video content is a high definition (HD) digital video resolution and the different resolution of the second different digital video content is a video resolution higher than HD resolution. 14. The system of claim 13 wherein the HD digital video resolution is at least 1920 pixels×1080 lines and the different resolution is at least 3840 pixels×2160 lines. 15. The system of claim 9 wherein the video graphics adjustment setting includes a resizing of the digital video content having the detected resolution. 16. The system of claim 15 wherein the resizing of the digital video content having the detected resolution includes downscaling of digital video frames of the digital video content. 17. The system of claim 9 wherein the video graphics adjustment setting includes a repositioning of frames of the digital video content on the display. 18. 
A non-transitory computer-readable medium having computer-executable instructions stored thereon that, when executed, cause at least one processor to: detect a plurality of different resolutions of each different digital video content of a plurality of different digital video content to be output from a receiving device to a display; and for each detected different resolution of each different digital video content: search for a corresponding previously stored video graphics adjustment setting associated with the detected different resolution, the association with the detected different resolution being that application of the corresponding previously stored video graphics adjustment setting to digital video content having the detected different resolution avoids edges of digital video frames of digital video content having the detected different resolution being cut off when presented on the display; retrieve the corresponding previously stored video graphics adjustment setting; and in response to the retrieving the corresponding previously stored video graphics adjustment setting, apply the corresponding previously stored video graphics adjustment setting to the different digital video content to be output from the receiving device. 19. The non-transitory computer-readable medium of claim 18 wherein the corresponding previously stored video graphics adjustment setting includes a resizing of the different digital video content to be output from the receiving device. 20. The non-transitory computer-readable medium of claim 19 wherein the resizing of the different digital video content to be output from the receiving device includes downscaling of digital video frames of the different digital video content.
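The per-resolution lookup that these claims describe (store a user-entered adjustment keyed by resolution, then automatically re-apply it to later content of the same resolution so overscan does not cut off frame edges) can be sketched as follows. This is an illustrative model only; the class, method names, and the scale/offset representation of a "video graphics adjustment setting" are assumptions, not details from the patent.

```python
# Hypothetical sketch of the claims' per-resolution graphics-adjustment lookup.
# A "setting" here is modeled as a uniform downscale factor plus an offset;
# the real setting could be any resize/reposition parameters.

class GraphicsAdjuster:
    def __init__(self):
        # resolution (width, height) -> stored adjustment (scale, x_off, y_off)
        self._settings = {}

    def store_user_adjustment(self, resolution, scale, x_off=0, y_off=0):
        """Save the adjustment the user dialed in for this resolution."""
        self._settings[resolution] = (scale, x_off, y_off)

    def adjust(self, frame_resolution):
        """Return the frame size after applying the stored setting, if any."""
        setting = self._settings.get(frame_resolution)
        if setting is None:
            # No stored setting for this resolution: pass through unchanged.
            return frame_resolution
        scale, _x_off, _y_off = setting
        w, h = frame_resolution
        return (int(w * scale), int(h * scale))

adjuster = GraphicsAdjuster()
# User slightly downscales 4K content so overscan no longer cuts off the edges;
# the setting is then re-applied to any later content at that same resolution.
adjuster.store_user_adjustment((3840, 2160), scale=0.95)
print(adjuster.adjust((3840, 2160)))  # stored 4K setting applied
print(adjuster.adjust((1920, 1080)))  # no setting stored for HD: unchanged
```

The key behavior matching the claims is that retrieval is keyed on the detected resolution, so switching between HD and UHD programming picks up the correct previously stored setting without further user input.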
2,600
10,262
10,262
15,371,672
2,623
A display has a display panel and a correction circuit. The display panel has a plurality of pixel circuits, and each of the pixel circuits has a plurality of pixels and a shared circuit. Each of the pixels has an organic light-emitting diode (OLED) and a driving transistor for driving the OLED. The shared circuit is coupled to the plurality of pixels and is configured to compensate shifts in threshold voltages of the plurality of pixels according to a received reference voltage. The correction circuit is coupled to the plurality of pixels and is configured to sense a driving current of the pixels of each pixel circuit and to adjust the reference voltage received by the shared circuit of the each pixel circuit according to the detected driving current of the pixels of each pixel circuit.
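The correction loop in the claims that follow (claims 3 and 4) senses the driving current, converts its variation into a voltage variation, compares that against two predetermined comparison potentials, and has a state machine step the reference voltage accordingly. A minimal numeric model of one such correction step is sketched below; the step direction and size are assumptions for illustration, since the patent does not fix them.

```python
# Illustrative model (not from the patent) of the two-comparator correction
# step: keep the sensed voltage variation inside the band defined by the two
# predetermined comparison potentials by nudging the reference voltage.
# Values are in arbitrary integer units to keep the sketch exact.

def adjust_reference(v_ref, delta_v, v_cmp_low, v_cmp_high, step=1):
    """One state-machine step adjusting v_ref based on the comparator outputs."""
    if delta_v > v_cmp_high:
        return v_ref - step   # variation too high: step reference down (assumed direction)
    if delta_v < v_cmp_low:
        return v_ref + step   # variation too low: step reference up (assumed direction)
    return v_ref              # within band: leave reference unchanged

print(adjust_reference(100, 20, -10, 10))   # above band
print(adjust_reference(100, -20, -10, 10))  # below band
print(adjust_reference(100, 5, -10, 10))    # inside band
```

Repeating this step each frame is what lets the correction circuit "sequentially adjust" the reference voltage per pixel circuit, as claim 2 puts it.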
1. A display apparatus, comprising: a display panel comprising a plurality of pixel circuits, each of the pixel circuits comprising: a plurality of pixels, wherein each of the pixels comprises an organic light-emitting diode (OLED) and a driving transistor for driving the OLED; and a first shared circuit, coupled to the pixels, for compensating a shift in a threshold voltage of the driving transistor of each pixel of each pixel circuit according to a reference voltage; and a correction circuit, coupled to the pixel circuits, for detecting a driving current of pixels of each pixel circuit and adjusting the reference voltage received by the first shared circuit of each pixel circuit according to the detected driving current of the pixels of the each pixel circuit. 2. The display apparatus of claim 1, wherein the correction circuit sequentially adjusts a voltage value of the reference voltage received by the first shared circuit of the each pixel circuit. 3. The display apparatus of claim 1, wherein the correction circuit comprises: a current mirror circuit for mirroring the detected driving current of the pixels of each pixel circuit to output a mirrored current; a conversion circuit for converting a variation of the mirrored current into a voltage variation; and a comparison circuit for comparing the voltage variation, a first predetermined comparison potential, and a second predetermined comparison potential, and adjusting the reference voltage according to comparison results, wherein the first predetermined comparison potential is not equal to the second predetermined comparison potential. 4. 
The display apparatus of claim 3, wherein the comparison circuit comprises: a first comparator, coupled to the conversion circuit, for comparing the voltage variation and the first predetermined comparison potential to output a first comparison signal; a second comparator, coupled to the conversion circuit, for comparing the voltage variation and the second predetermined comparison potential to output a second comparison signal; and a state machine, coupled to the first comparator and the second comparator, for adjusting the reference voltage according to the first comparison signal and the second comparison signal. 5. The display apparatus of claim 1, wherein the OLED has a first end and a second end, the second end of the OLED receives a first predetermined voltage and is coupled to a second end of the driving transistor, and each of the pixels further comprises: a first switch having a first end for receiving a data signal, a second end coupled to a first end of the driving transistor, and a control end for receiving a first control signal; a capacitor having a first end and a second end coupled to a control end of the driving transistor; a driving circuit for controlling electrical connection between the first end of the capacitor and the first end of the driving transistor according to a light emission control signal; a compensation circuit for controlling electrical connection between the second end of the capacitor and the first end of the OLED according to a second control signal; and a discharging circuit, coupled to the first end of the OLED and an initial voltage, controlling electrical connection between the first end of the OLED and the initial voltage according to a third control signal; wherein the first shared circuit of the each pixel circuit couples the first end of the capacitor to a second predetermined voltage or the reference voltage according to the second control signal and the light emission control signal. 6. 
The display apparatus of claim 5, wherein the first shared circuit comprises: a second switch having a first end for receiving the reference voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the second control signal; and a third switch having a first end for receiving the second predetermined voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the light emission control signal. 7. The display apparatus of claim 6, wherein the each pixel circuit further comprises a second shared circuit, wherein the pixels are provided between the first shared circuit and the second shared circuit, the second shared circuit comprising: a fourth switch having a first end for receiving the reference voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the second control signal; and a fifth switch having a first end for receiving the second predetermined voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the light emission control signal. 8. 
The display apparatus of claim 1, wherein the OLED has a first end and a second end, the second end of the OLED receives a first predetermined voltage and is coupled to a second end of the driving transistor, and each of the pixels further comprises: a first switch having a first end for receiving a data signal, a second end coupled to a first end of the driving transistor, and a control end for receiving a first control signal; a capacitor having a first end coupled to the first shared circuit and a second end coupled to a control end of the driving transistor; a driving circuit for controlling whether the first end of the capacitor and the first end of the driving transistor receive a second predetermined voltage according to a light emission control signal; a compensation circuit for controlling electrical connection between the second end of the capacitor and the first end of the OLED according to a second control signal; and a discharging circuit, coupled to the first end of the OLED and an initial voltage, controlling electrical connection between the first end of the OLED and the initial voltage according to a third control signal; wherein the first shared circuit of the each pixel circuit couples the first end of the capacitor to the reference voltage according to the second control signal. 9. 
The display apparatus of claim 5, wherein the reference voltage is not greater than a sum of a first number and a second number, the first number is a difference between a maximum voltage of the data signal and an absolute value of the threshold voltage of the driving transistor, the second number is a difference between the second predetermined voltage and a gate cut-off voltage of the driving transistor, and the initial voltage is not greater than a difference between a minimum voltage of the data signal and the absolute value of the threshold voltage of the driving transistor, and the initial voltage is less than a sum of the first predetermined voltage and a threshold voltage of the OLED. 10. The display apparatus of claim 5, wherein: during a first period of time, both the light emission control signal and the first control signal are at a first voltage, and both the second control signal and the third control signal are at a second voltage; during a second period of time, both the light emission control signal and the voltage of the third control signal are at the first voltage, both the first control signal and the second control signal are at the second voltage, wherein the second period of time is after the first period of time; and during a third period of time, the light emission control signal is at the second voltage, and the first control signal, the second control signal, and the third control signal are at the first voltage, wherein the third period of time is after the second period of time; and during a fourth period of time, both the light emission control signal and the third control signal are at the second voltage, and both the first control signal and the second control signal are at the first voltage, wherein the fourth period of time is after the third period of time. 11. 
The display apparatus of claim 10, wherein the correction circuit adjusts a potential of the reference voltage received by each pixel circuit during the fourth period of time according to a sum of currents of discharging circuits of the pixels during the fourth period of time. 12. The display apparatus of claim 8, wherein the first shared circuit comprises: a second switch having a first end for receiving the reference voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the second control signal. 13. The display apparatus of claim 12, wherein the each pixel circuit further comprises a second shared circuit, wherein the pixels are provided between the first shared circuit and the second shared circuit, the second shared circuit comprising: a fourth switch having a first end for receiving the reference voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the second control signal. 14. A display apparatus, comprising: a display panel comprising a plurality of pixel circuits, each of the pixel circuits comprising: a plurality of pixels, each of the pixels comprising: an OLED; a driving transistor for driving the OLED; and a driving circuit for receiving a reference voltage and compensating a shift in a threshold voltage of the driving transistor; and a first shared circuit, coupled to the pixels, for transmitting a second predetermined voltage to the pixels according to a light emission control signal; and a correction circuit, coupled to the pixel circuits, for detecting a driving current of each pixel and adjusting the reference voltage received by the driving circuit of the each pixel according to the detected driving current of the each pixel. 15. 
The display apparatus of claim 14, wherein the OLED has a first end and a second end, the second end of the OLED receives a first predetermined voltage and is coupled to a second end of the driving transistor, and each of the pixels further comprises: a first switch having a first end for receiving a data signal, a second end coupled to a first end of the driving transistor, and a control end for receiving a first control signal; a capacitor having a first end coupled to the first shared circuit and a second end coupled to the control end of the driving transistor; a compensation circuit for controlling electrical connection between the second end of the capacitor and the first end of the OLED according to a second control signal; and a discharging circuit, coupled to the first end of the OLED and an initial voltage, controlling electrical connection between the first end of the OLED and the initial voltage according to a third control signal; wherein the driving circuit controls electrical connection between the first end of the capacitor and the first end of the driving transistor according to the light emission control signal and controls whether the first end of the capacitor receives the reference voltage according to the second control signal; and wherein the first shared circuit of each pixel circuit couples the first end of the capacitor to the second predetermined voltage according to the light emission control signal. 16. The display apparatus of claim 15, wherein the first shared circuit comprises: a third switch having a first end for receiving the second predetermined voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the light emission control signal. 17. 
The display apparatus of claim 16, wherein the each pixel circuit further comprises a second shared circuit, wherein the pixels are provided between the first shared circuit and the second shared circuit, the second shared circuit comprising: a fifth switch having a first end for receiving the second predetermined voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the light emission control signal. 18. A method for controlling the display apparatus of claim 14, comprising: during a first period of time, setting both the light emission control signal and the first control signal to a first voltage, setting both the second control signal and the third control signal to a second voltage; during a second period of time, setting both the light emission control signal and the third control signal to the first voltage, and setting both the first control signal and the second control signal to the second voltage, wherein the second period of time is after the first period of time; and during a third period of time, setting the light emission control signal to the second voltage, setting the first control signal, the second control signal, and the third control signal to the first voltage, wherein the third period of time is after the second period of time; and during a fourth period of time, setting both the light emission control signal and the third control signal to the second voltage, and setting both the first control signal and the second control signal to the first voltage, wherein the fourth period of time is after the third period of time. 19. The method of claim 18, further comprising: adjusting a potential of the reference voltage received by the each pixel circuit during the fourth period of time according to a sum of currents of discharging circuits of all the pixels of the each pixel circuit during the fourth period of time.
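The four-phase control sequence recited in method claim 18 above can be tabulated directly from the claim text. The table below is a restatement of those signal levels as data (`V1`/`V2` stand for the claim's "first voltage"/"second voltage"); the column order and names are illustrative.

```python
# Signal levels per period, taken verbatim from claim 18.
# Tuple order: (light_emission, first_control, second_control, third_control)
SEQUENCE = [
    ("V1", "V1", "V2", "V2"),  # first period:  emission & ctrl1 at V1; ctrl2 & ctrl3 at V2
    ("V1", "V2", "V2", "V1"),  # second period: emission & ctrl3 at V1; ctrl1 & ctrl2 at V2
    ("V2", "V1", "V1", "V1"),  # third period:  emission at V2; ctrl1, ctrl2, ctrl3 at V1
    ("V2", "V1", "V1", "V2"),  # fourth period: emission & ctrl3 at V2; ctrl1 & ctrl2 at V1
]

for i, levels in enumerate(SEQUENCE, start=1):
    print(f"period {i}: {levels}")
```

Claim 19's reference-voltage adjustment happens during the fourth period, i.e. the last row of this table.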
A display has a display panel and a correction circuit. The display panel has a plurality of pixel circuits, and each of the pixel circuits has a plurality of pixels and a shared circuit. Each of the pixels has an organic light-emitting diode (OLED) and a driving transistor for driving the OLED. The shared circuit is coupled to the plurality of pixels and is configured to compensate shifts in threshold voltages of the plurality of pixels according to a received reference voltage. The correction circuit is coupled to the plurality of pixels and is configured to sense a driving current of the pixels of each pixel circuit and to adjust the reference voltage received by the shared circuit of the each pixel circuit according to the detected driving current of the pixels of each pixel circuit.1. A display apparatus, comprising: a display panel comprising a plurality of pixel circuits, each of the pixel circuits comprising: a plurality of pixels, wherein each of the pixels comprises an organic light-emitting diode (OLED) and a driving transistor for driving the OLED; and a first shared circuit, coupled to the pixels, for compensating a shift in a threshold voltage of the driving transistor of each pixel of each pixel circuit according to a reference voltage; and a correction circuit, coupled to the pixel circuits, for detecting a driving current of pixels of each pixel circuit and adjusting the reference voltage received by the first shared circuit of each pixel circuit according to the detected driving current of the pixels of the each pixel circuit. 2. The display apparatus of claim 1, wherein the correction circuit sequentially adjusts a voltage value of the reference voltage received by the first shared circuit of the each pixel circuit. 3. 
The display apparatus of claim 1, wherein the correction circuit comprises: a current mirror circuit for mirroring the detected driving current of the pixels of each pixel circuit to output a mirrored current; a conversion circuit for converting a variation of the mirrored current into a voltage variation; and a comparison circuit for comparing the voltage variation, a first predetermined comparison potential, and a second predetermined comparison potential, and adjusting the reference voltage according to comparison results, wherein the first predetermined comparison potential is not equal to the second predetermined comparison potential. 4. The display apparatus of claim 3, wherein the comparison circuit comprises: a first comparator, coupled to the conversion circuit, for comparing the voltage variation and the first predetermined comparison potential to output a first comparison signal; a second comparator, coupled to the conversion circuit, for comparing the voltage variation and the second predetermined comparison potential to output a second comparison signal; and a state machine, coupled to the first comparator and the second comparator, for adjusting the reference voltage according to the first comparison signal and the second comparison signal. 5. 
The display apparatus of claim 1, wherein the OLED has a first end and a second end, the second end of the OLED receives a first predetermined voltage and is coupled to a second end of the driving transistor, and each of the pixels further comprises: a first switch having a first end for receiving a data signal, a second end coupled to a first end of the driving transistor, and a control end for receiving a first control signal; a capacitor having a first end and a second end coupled to a control end of the driving transistor; a driving circuit for controlling electrical connection between the first end of the capacitor and the first end of the driving transistor according to a light emission control signal; a compensation circuit for controlling electrical connection between the second end of the capacitor and the first end of the OLED according to a second control signal; and a discharging circuit, coupled to the first end of the OLED and an initial voltage, controlling electrical connection between the first end of the OLED and the initial voltage according to a third control signal; wherein the first shared circuit of the each pixel circuit couples the first end of the capacitor to a second predetermined voltage or the reference voltage according to the second control signal and the light emission control signal. 6. The display apparatus of claim 5, wherein the first shared circuit comprises: a second switch having a first end for receiving the reference voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the second control signal; and a third switch having a first end for receiving the second predetermined voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the light emission control signal. 7. 
The display apparatus of claim 6, wherein the each pixel circuit further comprises a second shared circuit, wherein the pixels are provided between the first shared circuit and the second shared circuit, the second shared circuit comprising: a fourth switch having a first end for receiving the reference voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the second control signal; and a fifth switch having a first end for receiving the second predetermined voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the light emission control signal. 8. The display apparatus of claim 1, wherein the OLED has a first end and a second end, the second end of the OLED receives a first predetermined voltage and is coupled to a second end of the driving transistor, and each of the pixels further comprises: a first switch having a first end for receiving a data signal, a second end coupled to a first end of the driving transistor, and a control end for receiving a first control signal; a capacitor having a first end coupled to the first shared circuit and a second end coupled to a control end of the driving transistor; a driving circuit for controlling whether the first end of the capacitor and the first end of the driving transistor receive a second predetermined voltage according to a light emission control signal; a compensation circuit for controlling electrical connection between the second end of the capacitor and the first end of the OLED according to a second control signal; and a discharging circuit, coupled to the first end of the OLED and an initial voltage, controlling electrical connection between the first end of the OLED and the initial voltage according to a third control signal; wherein the first shared circuit of the each pixel circuit couples the first end of the capacitor to the reference voltage according to the second control signal. 9. 
The display apparatus of claim 5, wherein the reference voltage is not greater than a sum of a first number and a second number, the first number is a difference between a maximum voltage of the data signal and an absolute value of the threshold voltage of the driving transistor, the second number is a difference between the second predetermined voltage and a gate cut-off voltage of the driving transistor, and the initial voltage is not greater than a difference between a minimum voltage of the data signal and the absolute value of the threshold voltage of the driving transistor, and the initial voltage is less than a sum of the first predetermined voltage and a threshold voltage of the OLED. 10. The display apparatus of claim 5, wherein: during a first period of time, both the light emission control signal and the first control signal are at a first voltage, and both the second control signal and the third control signal are at a second voltage; during a second period of time, both the light emission control signal and the voltage of the third control signal are at the first voltage, both the first control signal and the second control signal are at the second voltage, wherein the second period of time is after the first period of time; and during a third period of time, the light emission control signal is at the second voltage, and the first control signal, the second control signal, and the third control signal are at the first voltage, wherein the third period of time is after the second period of time; and during a fourth period of time, both the light emission control signal and the third control signal are at the second voltage, and both the first control signal and the second control signal are at the first voltage, wherein the fourth period of time is after the third period of time. 11. 
The display apparatus of claim 10, wherein the correction circuit adjusts a potential of the reference voltage received by each pixel circuit during the fourth period of time according to a sum of currents of discharging circuits of the pixels during the fourth period of time. 12. The display apparatus of claim 8, wherein the first shared circuit comprises: a second switch having a first end for receiving the reference voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the second control signal. 13. The display apparatus of claim 12, wherein the each pixel circuit further comprises a second shared circuit, wherein the pixels are provided between the first shared circuit and the second shared circuit, the second shared circuit comprising: a fourth switch having a first end for receiving the reference voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the second control signal. 14. A display apparatus, comprising: a display panel comprising a plurality of pixel circuits, each of the pixel circuits comprising: a plurality of pixels, each of the pixels comprising: an OLED; a driving transistor for driving the OLED; and a driving circuit for receiving a reference voltage and compensating a shift in a threshold voltage of the driving transistor; and a first shared circuit, coupled to the pixels, for transmitting a second predetermined voltage to the pixels according to a light emission control signal; and a correction circuit, coupled to the pixel circuits, for detecting a driving current of each pixel and adjusting the reference voltage received by the driving circuit of the each pixel according to the detected driving current of the each pixel. 15. 
The display apparatus of claim 14, wherein the OLED has a first end and a second end, the second end of the OLED receives a first predetermined voltage and is coupled to a second end of the driving transistor, and each of the pixels further comprises: a first switch having a first end for receiving a data signal, a second end coupled to a first end of the driving transistor, and a control end for receiving a first control signal; a capacitor having a first end coupled to the first shared circuit and a second end coupled to the control end of the driving transistor; a compensation circuit for controlling electrical connection between the second end of the capacitor and the first end of the OLED according to a second control signal; and a discharging circuit, coupled to the first end of the OLED and an initial voltage, controlling electrical connection between the first end of the OLED and the initial voltage according to a third control signal; wherein the driving circuit controls electrical connection between the first end of the capacitor and the first end of the driving transistor according to the light emission control signal and controls whether the first end of the capacitor receives the reference voltage according to the second control signal; and wherein the first shared circuit of each pixel circuit couples the first end of the capacitor to the second predetermined voltage according to the light emission control signal. 16. The display apparatus of claim 15, wherein the first shared circuit comprises: a third switch having a first end for receiving the second predetermined voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the light emission control signal. 17. 
The display apparatus of claim 16, wherein the each pixel circuit further comprises a second shared circuit, wherein the pixels are provided between the first shared circuit and the second shared circuit, the second shared circuit comprising: a fifth switch having a first end for receiving the second predetermined voltage, a second end coupled to the first end of the capacitor, and a control end for receiving the light emission control signal. 18. A method for controlling the display apparatus of claim 14, comprising: during a first period of time, setting both the light emission control signal and the first control signal to a first voltage, setting both the second control signal and the third control signal to a second voltage; during a second period of time, setting both the light emission control signal and the third control signal to the first voltage, and setting both the first control signal and the second control signal to the second voltage, wherein the second period of time is after the first period of time; and during a third period of time, setting the light emission control signal to the second voltage, setting the first control signal, the second control signal, and the third control signal to the first voltage, wherein the third period of time is after the second period of time; and during a fourth period of time, setting both the light emission control signal and the third control signal to the second voltage, and setting both the first control signal and the second control signal to the first voltage, wherein the fourth period of time is after the third period of time. 19. The method of claim 18, further comprising: adjusting potential of the reference voltage received by the each pixel circuit during the fourth period of time according to a sum of currents of discharging circuits of all the pixels of the each pixel circuit during the fourth period of time.
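The correction behaviour recited in claims 3–4 can be sketched in a few lines: a voltage variation derived from the mirrored driving current is compared against two unequal predetermined comparison potentials, and a small state machine nudges the reference voltage accordingly. This is a minimal Python sketch; the function name, step size, and threshold semantics are illustrative assumptions, not values taken from the patent.

```python
def adjust_reference_voltage(v_ref, v_variation, v_low, v_high, step=0.01):
    """Return an updated reference voltage (hypothetical correction step).

    v_variation: voltage converted from the mirrored driving-current change.
    v_low, v_high: the two (unequal) predetermined comparison potentials.
    """
    assert v_low != v_high, "the two comparison potentials must differ"
    cmp1 = v_variation > v_high   # first comparator output
    cmp2 = v_variation < v_low    # second comparator output
    if cmp1:
        return v_ref - step       # variation too high: lower the reference
    if cmp2:
        return v_ref + step       # variation too low: raise the reference
    return v_ref                  # inside the window: leave unchanged
```

Sequentially applying this step per pixel circuit mirrors the "sequentially adjusts" language of claim 2, with the comparator pair acting as a window detector around the target operating point.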
2,600
10,263
10,263
15,131,587
2,644
The operation of a mobile electronic device is controlled at least partially in accordance with operating characteristics adopted while the phone is at a first location. The operation of a mobile electronic device is controlled at least partially in accordance with a theme that defines how an electronic device responds to user input.
1. An apparatus, comprising: at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to: store at least one dynamic card wherein a card comprises content, provide a user interface that enables a user to view at least one of the cards, estimate the location of the apparatus, update the content of the at least one dynamic card dependent upon the location of the apparatus. 2. An apparatus as claimed in claim 1, further configured to update the dynamic card dependent upon the location of the apparatus. 3. An apparatus as claimed in claim 1, further configured wherein the user can request to receive content to update the at least one dynamic card. 4. An apparatus as claimed in claim 1, further configured to automatically receive the content related to the at least one dynamic card dependent upon the location of the apparatus. 5. An apparatus according to claim 4, wherein the dynamic card is updated based on user preferences stored within the memory of the apparatus. 6. An apparatus according to claim 1 wherein the dynamic card belongs to a subscription service. 7. An apparatus as claimed in claim 1, wherein the dynamic card is associated with a service provider and the apparatus is configured to receive information that is held by the service provider. 8. An apparatus as claimed in claim 1, further comprising a controller configured to refresh the dynamic card dependent upon the location of the apparatus. 9. An apparatus as claimed in claim 1, further configured to download dynamic cards. 10. An apparatus as claimed in claim 9, wherein the apparatus is configured to download dynamic cards in advance of arriving at a location. 11. An apparatus as claimed in claim 9, wherein the dynamic cards are provided by service providers. 12. 
An apparatus as claimed in claim 11, wherein the dynamic cards are received based on a user subscription or a user preference held within the memory of the apparatus. 13. A method for updating the content of a dynamic card in an electronic device comprising: storing at least one dynamic card wherein a card comprises content, providing a user interface that enables a user to view at least one of the cards, estimating the location of the electronic device, updating the content of the at least one dynamic card dependent upon the location of the electronic device. 14. A method as claimed in claim 13, further comprising a user request to update the content of the at least one dynamic card. 15. A method as claimed in claim 13, further comprising automatically receiving the content related to the at least one dynamic card dependent upon the location of the electronic device. 16. A method as claimed in claim 13, further comprising refreshing the content of the dynamic card dependent upon the location of the electronic device. 17. A method as claimed in claim 13, further comprising downloading dynamic cards. 18. A method as claimed in claim 17, wherein the downloading of dynamic cards is in advance of arriving at a location. 19. A method as claimed in claim 13, wherein the dynamic cards are received based on a user subscription or a user preference held within the memory of the electronic device.
The operation of a mobile electronic device is controlled at least partially in accordance with operating characteristics adopted while the phone is at a first location. The operation of a mobile electronic device is controlled at least partially in accordance with a theme that defines how an electronic device responds to user input.1. An apparatus, comprising: at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to: store at least one dynamic card wherein a card comprises content, provide a user interface that enables a user to view at least one of the cards, estimate the location of the apparatus, update the content of the at least one dynamic card dependent upon the location of the apparatus. 2. An apparatus as claimed in claim 1, further configured to update the dynamic card dependent upon the location of the apparatus. 3. An apparatus as claimed in claim 1, further configured wherein the user can request to receive content to update the at least one dynamic card. 4. An apparatus as claimed in claim 1, further configured to automatically receive the content related to the at least one dynamic card dependent upon the location of the apparatus. 5. An apparatus according to claim 4, wherein the dynamic card is updated based on user preferences stored within the memory of the apparatus. 6. An apparatus according to claim 1 wherein the dynamic card belongs to a subscription service. 7. An apparatus as claimed in claim 1, wherein the dynamic card is associated with a service provider and the apparatus is configured to receive information that is held by the service provider. 8. An apparatus as claimed in claim 1, further comprising a controller configured to refresh the dynamic card dependent upon the location of the apparatus. 9. An apparatus as claimed in claim 1, further configured to download dynamic cards. 10. 
An apparatus as claimed in claim 9, wherein the apparatus is configured to download dynamic cards in advance of arriving at a location. 11. An apparatus as claimed in claim 9, wherein the dynamic cards are provided by service providers. 12. An apparatus as claimed in claim 11, wherein the dynamic cards are received based on a user subscription or a user preference held within the memory of the apparatus. 13. A method for updating the content of a dynamic card in an electronic device comprising: storing at least one dynamic card wherein a card comprises content, providing a user interface that enables a user to view at least one of the cards, estimating the location of the electronic device, updating the content of the at least one dynamic card dependent upon the location of the electronic device. 14. A method as claimed in claim 13, further comprising a user request to update the content of the at least one dynamic card. 15. A method as claimed in claim 13, further comprising automatically receiving the content related to the at least one dynamic card dependent upon the location of the electronic device. 16. A method as claimed in claim 13, further comprising refreshing the content of the dynamic card dependent upon the location of the electronic device. 17. A method as claimed in claim 13, further comprising downloading dynamic cards. 18. A method as claimed in claim 17, wherein the downloading of dynamic cards is in advance of arriving at a location. 19. A method as claimed in claim 13, wherein the dynamic cards are received based on a user subscription or a user preference held within the memory of the electronic device.
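The method of claim 13 can be illustrated with a short sketch: a stored "dynamic card" whose content is refreshed whenever the estimated device location changes. The class names, the content-service callback, and the string-valued location are all assumptions made for this example; the claims only require storing a card, estimating location, and updating card content dependent upon that location.

```python
class DynamicCard:
    """A card holding content that may change with location (hypothetical model)."""
    def __init__(self, card_id, content=""):
        self.card_id = card_id
        self.content = content

class Apparatus:
    def __init__(self, content_service):
        self.cards = {}                 # stored dynamic cards, keyed by id
        self.location = None
        self._service = content_service # stands in for a service provider

    def store_card(self, card):
        self.cards[card.card_id] = card

    def estimate_location(self, location):
        self.location = location
        self._update_cards()            # refresh dependent upon location

    def _update_cards(self):
        for card in self.cards.values():
            # content held by the service provider, keyed by card and location
            card.content = self._service(card.card_id, self.location)

# usage: a trivial provider and one card
service = lambda card_id, loc: f"{card_id} info for {loc}"
app = Apparatus(service)
app.store_card(DynamicCard("weather"))
app.estimate_location("Helsinki")
```

The subscription and pre-download variants (claims 6, 10, 18) would hang off the same update hook, fetching content for an anticipated location before arrival.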
2,600
10,264
10,264
14,913,717
2,645
Access control node ( 200 ), access device ( 202 A), tethering device ( 204 ), and methods therein, for enabling wireless access to a communications network ( 208 ). One or more access devices ( 202 ) having a wireless connection to the network ( 208 ) provide (2:1) relay properties to an access control node ( 200 ). When detecting (2:2) that network access is wanted for the tethering device ( 204 ), the access control node ( 200 ) selects (2:3) an access device ( 202 A) based on the obtained relay properties, to be used for sharing wireless connection with the tethering device ( 204 ). The access control node ( 200 ) then instructs (2:4) the selected access device ( 202 A) to be available as a relay to the communications network ( 208 ) for the tethering device ( 204 ) via a wireless link between the access device ( 202 A) and the tethering device ( 204 ). The tethering device ( 204 ) can then access (2:8) the communications network over the wireless link. By using the relay properties as a basis for selecting the access device ( 202 A), the performance of the wireless network access can be improved and unwanted battery consumption can be avoided. Furthermore, no manual actions are required to achieve the wireless network access.
1-32. (canceled) 33. A method performed by an access control node for enabling wireless access to a communications network, the method comprising: obtaining relay properties of one or more access devices having a wireless connection to the communications network; detecting that network access is wanted for a tethering device; selecting, based on the obtained relay properties, an access device of the one or more access devices to be used for sharing wireless connection to the communications network with the tethering device; and instructing the selected access device to be available as a relay to the communications network for the tethering device via a wireless link between the respective access device and the tethering device. 34. The method of claim 33, wherein the relay properties of the one or more access devices indicate at least one of: preconditions for sharing wireless network connection; current location; current battery level; one or more communication protocols used in the wireless connection; average delay in the wireless connection; average data throughput in the wireless connection; connectivity drop rate in the wireless connection; current total time of sharing wireless network connection with other tethering device(s); and number of other tethering devices currently using the respective access device as a relay. 35. The method of claim 34, wherein said preconditions are related to any of: time of day, battery temperature, present activities in the access device, and present power source used by the access device. 36. The method of claim 33, wherein the access control node detects that a wireless network connection is wanted by receiving an access request from the tethering device or by receiving a notification from the access device indicating that network access is wanted for the tethering device. 37. The method of claim 33, wherein the access device is selected further based on current location of the tethering device. 38. 
The method of claim 33, wherein the access device is selected further based on preferences defined for one or both of the tethering device and the access device. 39. An access control node arranged to enable wireless access to a communications network, the access control node comprising: a communication circuit configured to wirelessly transmit and receive messages; and a processing circuit that comprises a processor and a memory; wherein the processing circuit is configured to: obtain relay properties of one or more access devices having a wireless connection to the communications network; detect that network access is wanted for a tethering device; select, based on the obtained relay properties, an access device of the one or more access devices to be used for sharing wireless connection to the communications network with the tethering device; and instruct the selected access device to be available as a relay to the communications network for the tethering device via a wireless link between the respective access device and the tethering device. 40. The access control node of claim 39, wherein the relay properties of the one or more access devices indicate at least one of: preconditions for sharing wireless network connection; current location; current battery level; one or more communication protocols used in the wireless connection; average delay in the wireless connection; average data throughput in the wireless connection; connectivity drop rate in the wireless connection; current total time of sharing wireless network connection with other tethering device(s); and number of other tethering devices currently using the respective access device as a relay. 41. The access control node of claim 40, wherein said preconditions are related to any of: time of day, battery temperature, present activities in the access device, and present power source used by the access device. 42. 
The access control node of claim 39, wherein the processing circuit is configured to detect that a wireless network connection is wanted by receiving an access request from the tethering device or by receiving a notification from the access device indicating that network access is wanted for the tethering device. 43. The access control node of claim 39, wherein the processing circuit is configured to select the access device further based on current location of the tethering device. 44. The access control node of claim 39, wherein the processing circuit is configured to select the access device further based on preferences defined for one or both of the tethering device and the access device. 45. A method performed by an access device having a wireless connection to a communications network, for enabling wireless access to the communications network for a tethering device, the method comprising: providing relay properties of the access device to an access control node; detecting that network access is wanted for the tethering device; sending a notification to the access control node, the notification indicating that network access is wanted for the tethering device; and receiving an instruction from the access control node to be available for the tethering device as a relay for accessing the communications network via a wireless link between the access device and the tethering device. 46. The method of claim 45, wherein the provided relay properties indicate at least one of: preconditions for sharing wireless network connection; current location; current battery level; one or more communication protocols used in the wireless connection; average delay in the wireless connection; average data throughput in the wireless connection; connectivity drop rate in the wireless connection; current total time of sharing wireless network connection with other tethering device(s); and number of other tethering devices currently using the access device as a relay. 47. 
The method of claim 45, wherein the access device detects that a network connection is wanted by detecting an access point signal transmitted from the tethering device. 48. The method of claim 47, wherein the detected access point signal is a WiFi hotspot signal. 49. The method of claim 45, wherein the access device transmits an access point signal that the tethering device can detect for obtaining the network access via the access device. 50. The method of claim 49, wherein the transmitted access point signal is a WiFi hotspot signal. 51. An access device arranged to enable wireless access to a communications network for a tethering device when the access device has a wireless connection to the communications network, the access device comprising: a communication circuit configured to wirelessly transmit and receive messages; and a processing circuit that comprises a processor and a memory; wherein the processing circuit is configured to: provide relay properties of the access device to an access control node; detect that network access is wanted for the tethering device; send a notification to the access control node, via the communication circuit, the notification indicating that network access is wanted for the tethering device; and receive, via the communication circuit, an instruction from the access control node to be available for the tethering device as a relay for accessing the communications network via a wireless link between the access device and the tethering device. 52. 
The access device of claim 51, wherein the provided relay properties indicate at least one of: preconditions for sharing wireless network connection; current location; current battery level; one or more communication protocols used in the wireless connection; average delay in the wireless connection; average data throughput in the wireless connection; connectivity drop rate in the wireless connection; current total time of sharing wireless network connection with other tethering device(s); and number of other tethering devices currently using the access device as a relay. 53. The access device of claim 51, wherein the processing circuit is configured to detect that a network connection is wanted by detecting an access point signal transmitted from the tethering device. 54. The access device of claim 53, wherein the detected access point signal is a WiFi hotspot signal. 55. The access device of claim 51, wherein the processing circuit is configured to transmit an access point signal that the tethering device can detect for obtaining the network access via the access device. 56. The access device of claim 55, wherein the transmitted access point signal is a WiFi hotspot signal. 57. A method performed by a tethering device for obtaining a wireless connection to a communications network, the method comprising: transmitting an access point signal to indicate that a network connection is wanted; detecting an access point signal transmitted from an access device indicating that a network connection is available via the access device; and accessing the communications network over a wireless link between the access device and the tethering device. 58. The method of claim 57, wherein the transmitted access point signal is a WiFi hotspot signal. 59. The method of claim 57, wherein the detected access point signal transmitted from the access device is a WiFi hotspot signal. 60. 
A tethering device arranged to obtain a wireless connection to a communications network, the tethering device comprising: a communication circuit configured to wirelessly transmit and receive messages; and a processing circuit that comprises a processor and a memory; wherein the processing circuit is configured to: transmit an access point signal, via the communication circuit, to indicate that a network connection is wanted; detect an access point signal transmitted from an access device indicating that a network connection is available via the access device; and access the communications network over a wireless link between the access device and the tethering device. 61. The tethering device of claim 60, wherein the transmitted access point signal is a WiFi hotspot signal. 62. The tethering device of claim 60, wherein the detected access point signal transmitted from the access device is a WiFi hotspot signal. 63. A method for enabling wireless access to a communications network, the method comprising: providing relay properties to an access control node from one or more access devices having a wireless connection to the communications network; detecting, by the access control node or by the one or more access devices, that network access is wanted for a tethering device; selecting, by the access control node, an access device of the one or more access devices based on the obtained relay properties, to be used for sharing wireless connection to the communications network with the tethering device; instructing, by the access control node, the selected access device to be available as a relay to the communications network for the tethering device via a wireless link between the access device and the tethering device; and accessing, by the tethering device, the communications network over the wireless link.
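The selection step of claim 33 can be sketched as a simple scoring function over reported relay properties. The property names, the weights, and the `sharing_allowed` precondition flag are assumptions for illustration; the claims only require that the access device be selected "based on" the obtained relay properties (battery level, average delay, number of tethered devices, preconditions, and so on).

```python
def select_access_device(devices):
    """Pick an access device id from reported relay properties (hypothetical)."""
    def score(d):
        # prefer high battery, low average delay, few already-tethered devices
        return (d["battery_level"]
                - 0.5 * d["avg_delay_ms"]
                - 10 * d["tethered_count"])
    # honour preconditions: skip devices that currently refuse to share
    eligible = [d for d in devices if d.get("sharing_allowed", True)]
    if not eligible:
        return None
    return max(eligible, key=score)["device_id"]

# example: three candidates reporting their relay properties
candidates = [
    {"device_id": "A", "battery_level": 90, "avg_delay_ms": 20, "tethered_count": 0},
    {"device_id": "B", "battery_level": 95, "avg_delay_ms": 100, "tethered_count": 2},
    {"device_id": "C", "battery_level": 80, "avg_delay_ms": 10, "tethered_count": 0,
     "sharing_allowed": False},  # precondition forbids sharing right now
]
```

With these numbers, device A scores highest among the eligible candidates, so the access control node would instruct A to act as the relay; claims 37–38 would simply add tethering-device location and user preferences as further score inputs.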
Access control node ( 200 ), access device ( 202 A), tethering device ( 204 ), and methods therein, for enabling wireless access to a communications network ( 208 ). One or more access devices ( 202 ) having a wireless connection to the network ( 208 ) provide (2:1) relay properties to an access control node ( 200 ). When detecting (2:2) that network access is wanted for the tethering device ( 204 ), the access control node ( 200 ) selects (2:3) an access device ( 202 A) based on the obtained relay properties, to be used for sharing wireless connection with the tethering device ( 204 ). The access control node ( 200 ) then instructs (2:4) the selected access device ( 202 A) to be available as a relay to the communications network ( 208 ) for the tethering device ( 204 ) via a wireless link between the access device ( 202 A) and the tethering device ( 204 ). The tethering device ( 204 ) can then access (2:8) the communications network over the wireless link. By using the relay properties as a basis for selecting the access device ( 202 A), the performance of the wireless network access can be improved and unwanted battery consumption can be avoided. Furthermore, no manual actions are required to achieve the wireless network access.1-32. (canceled) 33. A method performed by an access control node for enabling wireless access to a communications network, the method comprising: obtaining relay properties of one or more access devices having a wireless connection to the communications network; detecting that network access is wanted for a tethering device; selecting, based on the obtained relay properties, an access device of the one or more access devices to be used for sharing wireless connection to the communications network with the tethering device; and instructing the selected access device to be available as a relay to the communications network for the tethering device via a wireless link between the respective access device and the tethering device. 34. 
The method of claim 33, wherein the relay properties of the one or more access devices indicate at least one of: preconditions for sharing wireless network connection; current location; current battery level; one or more communication protocols used in the wireless connection; average delay in the wireless connection; average data throughput in the wireless connection; connectivity drop rate in the wireless connection; current total time of sharing wireless network connection with other tethering device(s); and number of other tethering devices currently using the respective access device as a relay. 35. The method of claim 34, wherein said preconditions are related to any of: time of day, battery temperature, present activities in the access device, and present power source used by the access device. 36. The method of claim 33, wherein the access control node detects that a wireless network connection is wanted by receiving an access request from the tethering device or by receiving a notification from the access device indicating that network access is wanted for the tethering device. 37. The method of claim 33, wherein the access device is selected further based on current location of the tethering device. 38. The method of claim 33, wherein the access device is selected further based on preferences defined for one or both of the tethering device and the access device. 39. 
An access control node arranged to enable wireless access to a communications network, the access control node comprising: a communication circuit configured to wirelessly transmit and receive messages; and a processing circuit that comprises a processor and a memory; wherein the processing circuit is configured to: obtain relay properties of one or more access devices having a wireless connection to the communications network; detect that network access is wanted for a tethering device; select, based on the obtained relay properties, an access device of the one or more access devices to be used for sharing wireless connection to the communications network with the tethering device; and instruct the selected access device to be available as a relay to the communications network for the tethering device via a wireless link between the respective access device and the tethering device. 40. The access control node of claim 39, wherein the relay properties of the one or more access devices indicate at least one of: preconditions for sharing wireless network connection; current location; current battery level; one or more communication protocols used in the wireless connection; average delay in the wireless connection; average data throughput in the wireless connection; connectivity drop rate in the wireless connection; current total time of sharing wireless network connection with other tethering device(s); and number of other tethering devices currently using the respective access device as a relay. 41. The access control node of claim 40, wherein said preconditions are related to any of: time of day, battery temperature, present activities in the access device, and present power source used by the access device. 42. 
The access control node of claim 39, wherein the processing circuit is configured to detect that a wireless network connection is wanted by receiving an access request from the tethering device or by receiving a notification from the access device indicating that network access is wanted for the tethering device. 43. The access control node of claim 39, wherein the processing circuit is configured to select the access device further based on current location of the tethering device. 44. The access control node of claim 39, wherein the processing circuit is configured to select the access device further based on preferences defined for one or both of the tethering device and the access device. 45. A method performed by an access device having a wireless connection to a communications network, for enabling wireless access to the communications network for a tethering device, the method comprising: providing relay properties of the access device to an access control node; detecting that network access is wanted for the tethering device; sending a notification to the access control node, the notification indicating that network access is wanted for the tethering device; and receiving an instruction from the access control node to be available for the tethering device as a relay for accessing the communications network via a wireless link between the access device and the tethering device. 46. The method of claim 45, wherein the provided relay properties indicate at least one of: preconditions for sharing wireless network connection; current location; current battery level; one or more communication protocols used in the wireless connection; average delay in the wireless connection; average data throughput in the wireless connection; connectivity drop rate in the wireless connection; current total time of sharing wireless network connection with other tethering device(s); and number of other tethering devices currently using the access device as a relay. 47. 
The method of claim 45, wherein the access device detects that a network connection is wanted by detecting an access point signal transmitted from the tethering device. 48. The method of claim 47, wherein the detected access point signal is a WiFi hotspot signal. 49. The method of claim 45, wherein the access device transmits an access point signal that the tethering device can detect for obtaining the network access via the access device. 50. The method of claim 49, wherein the transmitted access point signal is a WiFi hotspot signal. 51. An access device arranged to enable wireless access to a communications network for a tethering device when the access device has a wireless connection to the communications network, the access device comprising: a communication circuit configured to wirelessly transmit and receive messages; and a processing circuit that comprises a processor and a memory; wherein the processing circuit is configured to: provide relay properties of the access device to an access control node; detect that network access is wanted for the tethering device; send a notification to the access control node, via the communication circuit, the notification indicating that network access is wanted for the tethering device; and receive, via the communication circuit, an instruction from the access control node to be available for the tethering device as a relay for accessing the communications network via a wireless link between the access device and the tethering device. 52. 
The access device of claim 51, wherein the provided relay properties indicate at least one of: preconditions for sharing wireless network connection; current location; current battery level; one or more communication protocols used in the wireless connection; average delay in the wireless connection; average data throughput in the wireless connection; connectivity drop rate in the wireless connection; current total time of sharing wireless network connection with other tethering device(s); and number of other tethering devices currently using the access device as a relay. 53. The access device of claim 51, wherein the processing circuit is configured to detect that a network connection is wanted by detecting an access point signal transmitted from the tethering device. 54. The access device of claim 53, wherein the detected access point signal is a WiFi hotspot signal. 55. The access device of claim 51, wherein the processing circuit is configured to transmit an access point signal that the tethering device can detect for obtaining the network access via the access device. 56. The access device of claim 55, wherein the transmitted access point signal is a WiFi hotspot signal. 57. A method performed by a tethering device for obtaining a wireless connection to a communications network, the method comprising: transmitting an access point signal to indicate that a network connection is wanted; detecting an access point signal transmitted from an access device indicating that a network connection is available via the access device; and accessing the communications network over a wireless link between the access device and the tethering device. 58. The method of claim 57, wherein the transmitted access point signal is a WiFi hotspot signal. 59. The method of claim 57, wherein the detected access point signal transmitted from the access device is a WiFi hotspot signal. 60. 
A tethering device arranged to obtain a wireless connection to a communications network, the tethering device comprising: a communication circuit configured to wirelessly transmit and receive messages; and a processing circuit that comprises a processor and a memory; wherein the processing circuit is configured to: transmit an access point signal, via the communication circuit, to indicate that a network connection is wanted; detect an access point signal transmitted from an access device indicating that a network connection is available via the access device; and access the communications network over a wireless link between the access device and the tethering device. 61. The tethering device of claim 60, wherein the transmitted access point signal is a WiFi hotspot signal. 62. The tethering device of claim 60, wherein the detected access point signal transmitted from the access device is a WiFi hotspot signal. 63. A method for enabling wireless access to a communications network, the method comprising: providing relay properties to an access control node from one or more access devices having a wireless connection to the communications network; detecting, by the access control node or by the one or more access devices, that network access is wanted for a tethering device; selecting, by the access control node, an access device of the one or more access devices based on the obtained relay properties, to be used for sharing wireless connection to the communications network with the tethering device; instructing, by the access control node, the selected access device to be available as a relay to the communications network for the tethering device via a wireless link between the access device and the tethering device; and accessing, by the tethering device, the communications network over the wireless link.
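The claims above describe an access control node selecting one access device to share its connection, based on reported relay properties (battery level, throughput, number of tethering devices already relayed, preconditions). A minimal sketch of such a selection, where the property names, scoring formula, and weights are illustrative assumptions rather than anything specified in the patent:

```python
from dataclasses import dataclass

@dataclass
class AccessDevice:
    """Relay properties of the kind listed in claim 34 (subset, hypothetical)."""
    name: str
    battery_level: float        # 0.0 to 1.0
    avg_throughput_mbps: float  # average data throughput of its network link
    tethered_count: int         # tethering devices it already relays for
    preconditions_met: bool     # e.g. time of day, power source (claim 35)

def select_access_device(devices):
    """Pick an access device to act as relay.

    Devices whose sharing preconditions are not met are excluded; the
    rest are ranked by an assumed score that favors throughput and
    battery and penalizes already-loaded relays.
    """
    candidates = [d for d in devices if d.preconditions_met]
    if not candidates:
        return None
    def score(d):
        return d.avg_throughput_mbps * d.battery_level / (1 + d.tethered_count)
    return max(candidates, key=score)

devices = [
    AccessDevice("phone-a", 0.2, 50.0, 0, True),
    AccessDevice("phone-b", 0.9, 40.0, 1, True),
    AccessDevice("phone-c", 1.0, 80.0, 0, False),  # precondition fails
]
best = select_access_device(devices)
```

Here "phone-c" is excluded despite the best link because its sharing preconditions fail, matching the role preconditions play in claims 34 and 35.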
2,600
10,265
10,265
15,310,811
2,685
A well tool assembly residing outside of a well includes a power source component and a measurement component coupled to the power source component. A transmitter in a housing assembly of the well tool assembly is used to generate a transmission penetrating a wall of the housing assembly to an exterior of the well tool assembly.
1. A method, comprising: with a well tool assembly residing outside of a well, the well tool assembly comprising a power source component and a measurement component coupled to the power source component, transmitting, using a transmitter in an external housing assembly of the well tool assembly, a status communication penetrating a wall of the external housing assembly to an exterior of the well tool assembly, the status communication regarding a status of the measurement component. 2. The method of claim 1, where transmitting the status communication comprises transmitting a status communication after the measurement component is coupled to the power source component, the status communication regarding status of the power source to measurement component coupling. 3. The method of claim 2, comprising transmitting a second communication penetrating the wall of the external housing assembly to an exterior of the well tool assembly, the second communication regarding contents of a memory in the housing assembly. 4. The method of claim 1, where transmitting the status communication penetrating the wall of the external housing assembly comprises generating a transmission representing the communication using the measurement component. 5. The method of claim 4, where generating the transmission representing the communication using the measurement component comprises generating a transmission representing the communication using at least one of an electromagnetic transmitter of a fluid resistivity measurement device or an acoustic transmitter of a pipe or cement evaluation device. 6. The method of claim 1, where transmitting the status communication through the wall of the external housing assembly comprises transmitting an acoustic, electromagnetic, or thermal communication signal. 7. The method of claim 1, where transmitting the status communication through the wall of the external housing assembly comprises transmitting a tactile communication signal. 8. 
The method of claim 1, comprising: receiving the communication at a receiver outside of the well; and displaying information in a visual form based on the communication. 9. The method of claim 1, further comprising receiving at a receiver within the exterior housing assembly of the well tool a transmission from exterior of the well tool assembly having penetrated the wall of the housing assembly. 10. The method of claim 9, comprising generating the transmission from exterior of the well tool assembly using an external transmitter unit. 11. The method of claim 9, comprising generating the transmission from exterior of the well tool assembly by tapping on the wall of the housing assembly. 12. A well tool assembly, comprising: a power source; a measurement component coupled to the power source; a sealed housing assembly enclosing the power source and the measurement component; and an at-surface transmitter in the housing assembly that produces, while the well tool assembly is outside of a well, a transmission that penetrates a wall of the housing assembly to an exterior of the well tool assembly. 13. The well tool assembly of claim 12, comprising a receiver in the housing assembly, the receiver tuned to receive a transmission generated by tapping on the housing assembly. 14. The well tool assembly of claim 12, where the transmitter is coupled to the measurement component to use an aspect of the measurement component in generating the transmission. 15. The well tool assembly of claim 12, where the transmitter generates a transmission in the form of tapping on the sealed housing or a thermal signal. 16. The well tool assembly of claim 12, where the well tool assembly is a wire conveyed well tool assembly. 17. 
A system, comprising: a well tool assembly comprising one or more measurement components in a housing assembly; and a transmitter in the housing assembly coupled to the measurement components, the transmitter being of a type that produces a transmission that penetrates a wall of the housing assembly. 18. The system of claim 17, comprising a receiver in the housing assembly tuned to receive a transmission from exterior of the housing assembly that penetrates the housing assembly. 19. The system of claim 17, where the transmitter is coupled to a measurement component to use the component in generating the transmission.
A well tool assembly residing outside of a well includes a power source component and a measurement component coupled to the power source component. A transmitter in a housing assembly of the well tool assembly is used to generate a transmission penetrating a wall of the housing assembly to an exterior of the well tool assembly.1. A method, comprising: with a well tool assembly residing outside of a well, the well tool assembly comprising a power source component and a measurement component coupled to the power source component, transmitting, using a transmitter in an external housing assembly of the well tool assembly, a status communication penetrating a wall of the external housing assembly to an exterior of the well tool assembly, the status communication regarding a status of the measurement component. 2. The method of claim 1, where transmitting the status communication comprises transmitting a status communication after the measurement component is coupled to the power source component, the status communication regarding status of the power source to measurement component coupling. 3. The method of claim 2, comprising transmitting a second communication penetrating the wall of the external housing assembly to an exterior of the well tool assembly, the second communication regarding contents of a memory in the housing assembly. 4. The method of claim 1, where transmitting the status communication penetrating the wall of the external housing assembly comprises generating a transmission representing the communication using the measurement component. 5. The method of claim 4, where generating the transmission representing the communication using the measurement component comprises generating a transmission representing the communication using at least one of an electromagnetic transmitter of a fluid resistivity measurement device or an acoustic transmitter of a pipe or cement evaluation device. 6. 
The method of claim 1, where transmitting the status communication through the wall of the external housing assembly comprises transmitting an acoustic, electromagnetic, or thermal communication signal. 7. The method of claim 1, where transmitting the status communication through the wall of the external housing assembly comprises transmitting a tactile communication signal. 8. The method of claim 1, comprising: receiving the communication at a receiver outside of the well; and displaying information in a visual form based on the communication. 9. The method of claim 1, further comprising receiving at a receiver within the exterior housing assembly of the well tool a transmission from exterior of the well tool assembly having penetrated the wall of the housing assembly. 10. The method of claim 9, comprising generating the transmission from exterior of the well tool assembly using an external transmitter unit. 11. The method of claim 9, comprising generating the transmission from exterior of the well tool assembly by tapping on the wall of the housing assembly. 12. A well tool assembly, comprising: a power source; a measurement component coupled to the power source; a sealed housing assembly enclosing the power source and the measurement component; and an at-surface transmitter in the housing assembly that produces, while the well tool assembly is outside of a well, a transmission that penetrates a wall of the housing assembly to an exterior of the well tool assembly. 13. The well tool assembly of claim 12, comprising a receiver in the housing assembly, the receiver tuned to receive a transmission generated by tapping on the housing assembly. 14. The well tool assembly of claim 12, where the transmitter is coupled to the measurement component to use an aspect of the measurement component in generating the transmission. 15. 
The well tool assembly of claim 12, where the transmitter generates a transmission in the form of tapping on the sealed housing or a thermal signal. 16. The well tool assembly of claim 12, where the well tool assembly is a wire conveyed well tool assembly. 17. A system, comprising: a well tool assembly comprising one or more measurement components in a housing assembly; and a transmitter in the housing assembly coupled to the measurement components, the transmitter being of a type that produces a transmission that penetrates a wall of the housing assembly. 18. The system of claim 17, comprising a receiver in the housing assembly tuned to receive a transmission from exterior of the housing assembly that penetrates the housing assembly. 19. The system of claim 17, where the transmitter is coupled to a measurement component to use the component in generating the transmission.
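Claims 7, 11, and 15 above contemplate a tactile status signal, e.g. tapping, that penetrates the sealed housing wall. One conceivable encoding, purely as an illustrative sketch (the timing values, bit order, and framing are assumptions, not taken from the patent), is on/off keying of a status byte by inter-tap gap length:

```python
def encode_status(status: int, short: float = 0.1, long: float = 0.3):
    """Encode a status byte as eight inter-tap delays (seconds).

    A short gap represents a 0 bit, a long gap a 1 bit, most-significant
    bit first. Values chosen here are hypothetical.
    """
    assert 0 <= status < 256
    return [long if (status >> bit) & 1 else short
            for bit in range(7, -1, -1)]

def decode_status(delays, threshold: float = 0.2) -> int:
    """Invert encode_status by classifying each gap against a threshold."""
    status = 0
    for gap in delays:
        status = (status << 1) | (1 if gap > threshold else 0)
    return status

roundtrip_ok = decode_status(encode_status(0xA5)) == 0xA5
```

A receiver tuned to tapping (claim 13) would perform the decoding side of this round trip after timestamping the detected taps.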
2,600
10,266
10,266
14,336,938
2,612
Techniques are described for deriving information, including graphical representations, based on perspectives of a 3D scene by utilizing sensor model representations of location points in the 3D scene. A 2D view point representation of a location point is derived based on the sensor model representation. From this information, a data representation can be determined. The 2D view point representation can be used to determine a second 2D view point representation. Other techniques include using sensor model representations of location points associated with dynamic objects in a 3D scene. These sensor model representations are generated using sensor systems having perspectives external to the location points and are used to determine a 3D model associated with a dynamic object. Data or graphical representations may be determined based on the 3D model. A system for obtaining information based on perspectives of a 3D scene includes a data manager and a renderer.
1-10. (canceled) 11. A computer-implemented method for generating information based on perspectives of a three-dimensional (3D) scene comprising: receiving a first sensor model representation of a first location point associated with a first dynamic object in the 3D scene, wherein the first sensor model representation is generated by a first sensor system having a first perspective external to the first location point; receiving a second sensor model representation of a second location point associated with a second dynamic object in the 3D scene, wherein the second sensor model representation is generated by a second sensor system having a second perspective external to the second location point; determining, with a processor-based computing device, a 3D model associated with the second dynamic object based on the first sensor model representation; and determining, with a processor-based computing device, a data representation based on the 3D model associated with the second dynamic object. 12. The computer-implemented method of claim 11, wherein the first dynamic object and the second dynamic object are the same dynamic object. 13. The computer-implemented method of claim 11, wherein the determining includes automatically determining a data representation. 14. The computer-implemented method of claim 11, wherein determining a 3D model includes generating a first plane associated with the second dynamic object at a fixed angle relative to a second plane associated with the first location point. 15. The computer-implemented method of claim 14, wherein determining a data representation includes determining a graphical representation. 16. The computer-implemented method of claim 11, further comprising: enabling a user to select a portion of the 3D scene using a view port; and generating a graphical representation based on the 3D model associated with the second dynamic object and the selected portion of the 3D scene. 17. 
The computer-implemented method of claim 16, wherein the enabling includes enabling a user to select a portion of the 3D scene using at least one of pan, tilt and zoom (PTZ) controls. 18. The computer-implemented method of claim 16, wherein the enabling includes enabling a user to select a moving location point of at least one of the first dynamic object and the second dynamic object. 19. The computer-implemented method of claim 16, wherein the enabling includes enabling a user to select a perspective of the second dynamic object. 20-23. (canceled) 24. A system for generating information based on perspectives of a three-dimensional (3D) scene comprising: a data manager configured to receive a first sensor model representation of a first location point associated with a first dynamic object in the 3D scene, wherein the first sensor model representation is generated by a first sensor system having a first perspective external to the first location point; receive a second sensor model representation of a second location point associated with a second dynamic object in the 3D scene, wherein the second sensor model representation is generated by a second sensor system having a second perspective external to the second location point; and determine a 3D model associated with the second dynamic object based on the first sensor model representation; and a renderer configured to determine a graphical representation based on the 3D model associated with the second dynamic object. 25. The system of claim 24, wherein the first dynamic object and the second dynamic object are the same dynamic object. 26. The system of claim 24, wherein the first sensor system and the second sensor system are the same sensor system. 27. The system of claim 24, wherein at least one of the first sensor system and the second sensor system are imaging devices. 28. 
The system of claim 24, further comprising: an object tracker configured to determine the first sensor model representation and the second sensor model representation. 29. The system of claim 24, wherein the data manager is further configured to generate a first plane associated with the second dynamic object at a fixed angle relative to a second plane associated with the first location point. 30. The system of claim 24, further comprising: an operator control configured to enable a user to select a portion of the 3D scene using a view port, and wherein the renderer is configured to generate a graphical representation based on the 3D model associated with the second dynamic object and the selected portion of the 3D scene. 31. The system of claim 30, wherein the operator control is configured to enable a user to select a portion of the 3D scene using at least one of pan, tilt and zoom (PTZ) controls. 32. The system of claim 30, wherein the operator control is configured to enable a user to select a moving location point of at least one of the first dynamic object or the second dynamic object. 33. The system of claim 30, wherein the operator control is configured to enable a user to select a perspective of the second dynamic object. 34. The system of claim 30, wherein the renderer is configured to provide a graphical image during a display of the 3D scene based on the generated graphical representation. 35. 
A system for generating information based on perspectives of a three-dimensional (3D) scene comprising: a data manager configured to receive a first sensor model representation of a first location point associated with a first dynamic object in the 3D scene, wherein the first sensor model representation is generated by a first sensor system having a first perspective external to the first location point; and receive a second sensor model representation of a second location point associated with a second dynamic object in the 3D scene, wherein the second sensor model representation is generated by a second sensor system having a second perspective external to the second location point; determine a 3D model associated with the second dynamic object based on the first sensor model representation; and determine a data representation based on the 3D model associated with the second dynamic object. 36. The system of claim 35, further comprising: an object tracker configured to determine the first sensor model representation and the second sensor model representation. 37. The system of claim 35, wherein the data manager is further configured to generate a first plane associated with the second dynamic object at a fixed angle relative to a second plane associated with the first location point. 38. The system of claim 35, further comprising: a renderer further configured to generate a graphical representation based on the 3D model associated with the second dynamic object and the selected portion of the 3D scene.
Techniques are described for deriving information, including graphical representations, based on perspectives of a 3D scene by utilizing sensor model representations of location points in the 3D scene. A 2D view point representation of a location point is derived based on the sensor model representation. From this information, a data representation can be determined. The 2D view point representation can be used to determine a second 2D view point representation. Other techniques include using sensor model representations of location points associated with dynamic objects in a 3D scene. These sensor model representations are generated using sensor systems having perspectives external to the location points and are used to determine a 3D model associated with a dynamic object. Data or graphical representations may be determined based on the 3D model. A system for obtaining information based on perspectives of a 3D scene includes a data manager and a renderer.1-10. (canceled) 11. A computer-implemented method for generating information based on perspectives of a three-dimensional (3D) scene comprising: receiving a first sensor model representation of a first location point associated with a first dynamic object in the 3D scene, wherein the first sensor model representation is generated by a first sensor system having a first perspective external to the first location point; receiving a second sensor model representation of a second location point associated with a second dynamic object in the 3D scene, wherein the second sensor model representation is generated by a second sensor system having a second perspective external to the second location point; determining, with a processor-based computing device, a 3D model associated with the second dynamic object based on the first sensor model representation; and determining, with a processor-based computing device, a data representation based on the 3D model associated with the second dynamic object. 12. 
The computer-implemented method of claim 11, wherein the first dynamic object and the second dynamic object are the same dynamic object. 13. The computer-implemented method of claim 11, wherein the determining includes automatically determining a data representation. 14. The computer-implemented method of claim 11, wherein determining a 3D model includes generating a first plane associated with the second dynamic object at a fixed angle relative to a second plane associated with the first location point. 15. The computer-implemented method of claim 14, wherein determining a data representation includes determining a graphical representation. 16. The computer-implemented method of claim 11, further comprising: enabling a user to select a portion of the 3D scene using a view port; and generating a graphical representation based on the 3D model associated with the second dynamic object and the selected portion of the 3D scene. 17. The computer-implemented method of claim 16, wherein the enabling includes enabling a user to select a portion of the 3D scene using at least one of pan, tilt and zoom (PTZ) controls. 18. The computer-implemented method of claim 16, wherein the enabling includes enabling a user to select a moving location point of at least one of the first dynamic object and the second dynamic object. 19. The computer-implemented method of claim 16, wherein the enabling includes enabling a user to select a perspective of the second dynamic object. 20-23. (canceled) 24. 
A system for generating information based on perspectives of a three-dimensional (3D) scene comprising: a data manager configured to receive a first sensor model representation of a first location point associated with a first dynamic object in the 3D scene, wherein the first sensor model representation is generated by a first sensor system having a first perspective external to the first location point; receive a second sensor model representation of a second location point associated with a second dynamic object in the 3D scene, wherein the second sensor model representation is generated by a second sensor system having a second perspective external to the second location point; and determine a 3D model associated with the second dynamic object based on the first sensor model representation; and a renderer configured to determine a graphical representation based on the 3D model associated with the second dynamic object. 25. The system of claim 24, wherein the first dynamic object and the second dynamic object are the same dynamic object. 26. The system of claim 24, wherein the first sensor system and the second sensor system are the same sensor system. 27. The system of claim 24, wherein at least one of the first sensor system and the second sensor system are imaging devices. 28. The system of claim 24, further comprising: an object tracker configured to determine the first sensor model representation and the second sensor model representation. 29. The system of claim 24, wherein the data manager is further configured to generate a first plane associated with the second dynamic object at a fixed angle relative to a second plane associated with the first location point. 30. 
The system of claim 24, further comprising: an operator control configured to enable a user to select a portion of the 3D scene using a view port, and wherein the renderer is configured to generate a graphical representation based on the 3D model associated with the second dynamic object and the selected portion of the 3D scene. 31. The system of claim 30, wherein the operator control is configured to enable a user to select a portion of the 3D scene using at least one of pan, tilt and zoom (PTZ) controls. 32. The system of claim 30, wherein the operator control is configured to enable a user to select a moving location point of at least one of the first dynamic object or the second dynamic object. 33. The system of claim 30, wherein the operator control is configured to enable a user to select a perspective of the second dynamic object. 34. The system of claim 30, wherein the renderer is configured to provide a graphical image during a display of the 3D scene based on the generated graphical representation. 35. A system for generating information based on perspectives of a three-dimensional (3D) scene comprising: a data manager configured to receive a first sensor model representation of a first location point associated with a first dynamic object in the 3D scene, wherein the first sensor model representation is generated by a first sensor system having a first perspective external to the first location point; and receive a second sensor model representation of a second location point associated with a second dynamic object in the 3D scene, wherein the second sensor model representation is generated by a second sensor system having a second perspective external to the second location point; determine a 3D model associated with the second dynamic object based on the first sensor model representation; and determine a data representation based on the 3D model associated with the second dynamic object. 36. 
The system of claim 35, further comprising: an object tracker configured to determine the first sensor model representation and the second sensor model representation. 37. The system of claim 35, wherein the data manager is further configured to generate a first plane associated with the second dynamic object at a fixed angle relative to a second plane associated with the first location point. 38. The system of claim 35, further comprising: a renderer further configured to generate a graphical representation based on the 3D model associated with the second dynamic object and the selected portion of the 3D scene.
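The view-port claims above (16-19 and 30-33) let a user select a portion of the 3D scene through pan, tilt and zoom (PTZ) controls. As a rough illustration of how such controls could map to a viewport selection, the sketch below converts pan/tilt angles and a zoom factor into a unit view direction and a field of view; the 60-degree base FOV and the zoom-divides-FOV convention are assumptions for this sketch, not taken from the claims.

```python
import math

def ptz_to_view(pan_deg, tilt_deg, zoom):
    """Map PTZ view-port controls to a unit view direction and field of view.

    pan_deg/tilt_deg: rotation about the vertical/horizontal axes, in degrees.
    zoom: magnification factor; zooming in narrows the visible portion.
    Illustrative only; the base FOV and axis conventions are assumptions.
    """
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    direction = (
        math.cos(tilt) * math.sin(pan),   # x: left/right component from pan
        math.sin(tilt),                   # y: up/down component from tilt
        math.cos(tilt) * math.cos(pan),   # z: forward component
    )
    fov_deg = 60.0 / zoom                 # zooming in narrows the field of view
    return direction, fov_deg
```

A renderer could then rasterize only the part of the 3D model that falls inside the frustum defined by this direction and field of view.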
2,600
10,267
10,267
13,215,481
2,626
The present disclosure describes a method for conserving power on a portable electronic device and a portable electronic device configured for the same. In accordance with one embodiment, there is provided a method for conserving power comprising: switching a portable electronic device to a low power mode in response to a trigger condition; and switching the portable electronic device from the low power mode to a full power mode on the portable electronic device in response to detection of a designated wake-up gesture on a touch-sensitive overlay of the portable electronic device.
1. A method for conserving power on a portable electronic device, comprising: switching the portable electronic device to a low power mode in response to a trigger condition; detecting a touch on a touch-sensitive overlay of the portable electronic device; determining touch attributes of the touch; determining when the touch is a designated wake-up gesture based on the determined touch attributes; and switching the portable electronic device from the low power mode to a full power mode on the portable electronic device in response to detection of the designated wake-up gesture on the touch-sensitive overlay of the portable electronic device. 2. The method of claim 1, wherein the designated wake-up gesture is a pair of swipes each in a designated direction. 3. The method of claim 2, wherein the designated wake-up gesture is a pair of down swipes located towards opposite sides of the portable electronic device. 4. The method of claim 2, wherein the designated wake-up gesture is a pair of up swipes located towards opposite sides of the portable electronic device. 5. The method of claim 1, wherein the designated wake-up gesture is a meta-navigation gesture, wherein the meta-navigation gesture comprises a gesture with a start location outside of a display area of the touch-sensitive overlay and an end location within the display area of the touch-sensitive overlay. 6. The method of claim 5, wherein the determined touch attributes comprise a start location of the touch and one or more of a distance travelled of the touch, a speed of the touch when the touch is detected, a direction of the touch when the touch is detected or an end location of the touch, wherein the touch is determined to be a meta-navigation gesture based on the start location and the one or more of the speed when the touch is detected, the direction of the touch when the touch is detected or the end location of the touch. 7. 
The method of claim 6, wherein the touch is determined to be a meta-navigation gesture when the start location of the touch is outside of a display area of the touch-sensitive display and the touch travels to the display area of the touch-sensitive display. 8. The method of claim 6, wherein the touch is determined to be a meta-navigation gesture when the start location of the touch is outside of a display area of the touch-sensitive display and an outside of a buffer region adjacent the display area and the touch travels through the buffer region to the display area of the touch-sensitive display. 9. The method of claim 8, wherein the touch is not a meta-navigation gesture when the start location is in the buffer region. 10. The method of claim 5, wherein detecting a touch comprises detecting multiple touches that overlap in time on the touch-sensitive display and determining touch attributes for each touch, wherein determining when the touch is the designated wake-up gesture comprises determining that the multiple touches comprise a meta-navigation gesture when at least one of the touches is a meta-navigation gesture. 11. The method of claim 1, wherein the designated wake-up gesture is performed in a designated area of the touch-sensitive overlay. 12. The method of claim 11, wherein the designated area is outside of a display area of the touch-sensitive overlay. 13. The method of claim 11, wherein the designated area is a buffer region between a display area and a non-display area of the touch-sensitive overlay. 14. The method of claim 11, wherein the designated area is a non-display area outside of a buffer region adjacent to a display area of the touch-sensitive overlay. 15. The method of claim 11, wherein only the designated area of the touch-sensitive overlay is scanned to detect the designated wake-up gesture. 16. 
The method of claim 1, wherein inputs other than the designated wake-up gesture are ignored when the portable electronic device is in the low power mode. 17. An electronic device comprising: a display; a touch-sensitive overlay which overlays a portion of the display; a processor coupled to the touch-sensitive overlay, wherein the processor is configured for switching to a low power mode in response to a trigger condition; detecting a touch on the touch-sensitive overlay; determining touch attributes of the touch; determining when the touch is a designated wake-up gesture based on the determined touch attributes; and switching from the low power mode to a full power mode on the electronic device in response to detection of the designated wake-up gesture on the touch-sensitive overlay. 18. An electronic device comprising: a display; a touch-sensitive overlay which overlays at least a portion of the display; a touch-sensitive bezel adjacent the touch-sensitive display; a processor coupled to the touch-sensitive overlay and touch-sensitive bezel, wherein the processor is configured for switching to a low power mode in response to a trigger condition; detecting a touch on the touch-sensitive overlay; determining touch attributes of the touch; determining when the touch is a designated wake-up gesture based on the determined touch attributes; and switching from the low power mode to a full power mode on the electronic device in response to detection of the designated wake-up gesture on the touch-sensitive overlay. 19. The electronic device of claim 18, wherein the designated wake-up gesture is a meta-navigation gesture which comprises a gesture with a start location on the touch-sensitive bezel. 20. The electronic device of claim 19, wherein the meta-navigation gesture comprises a gesture with a start location on the touch-sensitive bezel and the touch travels across the touch-sensitive bezel to a display area of the touch-sensitive display.
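The wake-up-gesture claims above (5-9) classify a touch by where it starts and ends relative to the display area and an adjacent buffer region: a meta-navigation gesture starts outside both and travels into the display area, while a start inside the buffer region disqualifies the touch. A minimal sketch of that classification, with illustrative rectangle geometry not taken from the claims:

```python
def classify_touch(start, end, display, buffer_px):
    """Return True when a touch qualifies as a meta-navigation wake-up gesture.

    start/end: (x, y) touch coordinates.
    display: (x0, y0, x1, y1) bounds of the display area.
    buffer_px: width of the buffer band just outside the display area.
    Geometry and names are illustrative assumptions, not from the claims.
    """
    x0, y0, x1, y1 = display

    def in_display(p):
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

    def in_buffer(p):
        bx0, by0 = x0 - buffer_px, y0 - buffer_px
        bx1, by1 = x1 + buffer_px, y1 + buffer_px
        return (bx0 <= p[0] <= bx1 and by0 <= p[1] <= by1) and not in_display(p)

    # Per the claims: a start in the display area or buffer region is not a
    # meta-navigation gesture; the touch must start beyond the buffer and
    # travel through it to end in the display area.
    if in_display(start) or in_buffer(start):
        return False
    return in_display(end)
```

A low-power controller would scan only for such touches and trigger the switch to full power when `classify_touch` succeeds.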
2,600
10,268
10,268
15,496,972
2,616
Various virtual reality computing systems and methods are disclosed. In one aspect, a method of delivering video frame data to multiple VR displays is provided. The method includes generating content for multiple VR displays and sensing for competing needs for resources with real time requirements of the multiple VR displays. If competing needs for resources with real time requirements are sensed, a selected refresh offset for refreshes of the multiple VR displays is determined to avoid conflict between the competing needs for resources of the multiple VR displays. The selected refresh offset is imposed and the content is delivered to the multiple VR displays.
1. A method of delivering video frame data to multiple VR displays, comprising: generating content for multiple VR displays; sensing for competing needs for resources with real time requirements of the multiple VR displays; if competing needs for resources with real time requirements are sensed, determining a selected refresh offset for refreshes of the multiple VR displays to avoid conflict between the competing needs for resources of the multiple VR displays; imposing the selected refresh offset; and delivering the content to the multiple VR displays. 2. The method of claim 1, wherein the resources comprise computation for rendering and asynchronous time warp requests. 3. The method of claim 1, wherein the multiple displays support dynamic refresh, the method comprising if competing needs for resources with real time requirements are sensed, also determining a selected dynamic refresh rate for refreshes of the multiple VR displays to aid in avoiding the competing needs for resources made by the multiple VR displays, and imposing the selected refresh offset and dynamic refresh rate. 4. The method of claim 3, wherein the resources comprise computation for rendering and asynchronous time warp requests. 5. The method of claim 1, wherein the generating the content is performed by a single GPU. 6. The method of claim 1, wherein the generating the content for multiple VR displays comprises generating the content for one of the multiple VR displays using a GPU and generating or delivering the content for another of the multiple VR displays using another GPU. 7. The method of claim 6, wherein the GPU is configured as a master and the another GPU is configured as a slave such that the master controls the selected refresh offset of frames generated or delivered by the slave GPU. 8. 
A method of delivering video frame data to multiple VR displays, comprising: running a first application on a computing device to generate content for multiple VR displays; sensing for competing needs for resources with real time requirements using a second application; if competing needs for resources with real time requirements are sensed, using the second application to determine a selected refresh offset for refreshes of the multiple VR displays to avoid conflict between the competing needs for resources of the multiple VR displays; imposing the selected refresh offset; and delivering the content to the multiple VR displays. 9. The method of claim 8, wherein the resources comprise computation for rendering and asynchronous time warp requests. 10. The method of claim 8, wherein the multiple displays support dynamic refresh, the method comprising if movements are sensed, also determining a selected dynamic refresh rate for refreshes of the multiple VR displays to aid in avoiding the competing requests for resources made by the multiple VR displays due to the movements, and imposing the selected refresh offset and dynamic refresh rate. 11. The method of claim 10, wherein the resources comprise computation for rendering and asynchronous time warp requests. 12. The method of claim 8, wherein the first application is run by a single GPU. 13. The method of claim 8, wherein the generating the content for multiple VR displays comprises generating the content for one of the multiple VR displays using a GPU and generating or delivering the content for another of the multiple VR displays using another GPU. 14. The method of claim 13, wherein the GPU is configured as a master and the another GPU is configured as a slave such that the master controls the selected refresh offset of frames generated or delivered by the slave GPU. 15. 
A virtual reality computing system, comprising: a computing device; a processor operable to perform instructions to generate content for multiple VR displays, to sense for competing needs for resources with real time requirements of the multiple VR displays, if competing needs for resources with real time requirements are sensed, to determine a selected refresh offset for refreshes of the multiple VR displays to avoid conflict between the competing needs for resources of the multiple VR displays, to impose the selected refresh offset, and to deliver the content to the multiple VR displays. 16. The virtual reality computing system of claim 15, comprising the multiple VR displays. 17. The virtual reality computing system of claim 15, wherein the processor comprises a CPU, a GPU or a combined CPU and GPU. 18. The virtual reality computing system of claim 15, wherein the computing device comprises another processor wherein the processor generates the content for one of the multiple VR displays and the another processor generates or delivers the content for another of the multiple VR displays. 19. The virtual reality computing system of claim 18, wherein the processor is configured as a master and the another processor is configured as a slave such that the master controls the selected refresh offset of frames generated or delivered by the slave processor. 20. The virtual reality computing system of claim 15, wherein the multiple displays support dynamic refresh, the processor being operable to, if competing needs for resources with real time requirements are sensed, also determine a selected dynamic refresh rate for refreshes of the multiple VR displays to aid in avoiding conflict between the competing needs for resources of the multiple VR displays, and to impose the selected refresh offset and dynamic refresh rate.
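The refresh-offset claims above stagger the vsync deadlines of multiple VR displays so their rendering and asynchronous time-warp work do not contend for the same resources at the same instant. A minimal sketch, assuming an even-spacing policy that the claims do not specify:

```python
def stagger_refreshes(refresh_hz, num_displays):
    """Spread the refresh deadlines of num_displays evenly across one
    refresh period, returning each display's offset in milliseconds.

    A simplified sketch of selected-refresh-offset determination; the
    even-spacing policy is an assumption, not taken from the claims.
    """
    period_ms = 1000.0 / refresh_hz
    offset_ms = period_ms / num_displays
    # Display i refreshes offset_ms * i after display 0, so per-frame
    # render and time-warp work for each display lands in its own slot.
    return [round(offset_ms * i, 3) for i in range(num_displays)]
```

For two 90 Hz headsets this yields offsets of 0 and roughly 5.6 ms, so each display's time-warp deadline falls in the other's idle half-period.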
2,600
10,269
10,269
15,096,711
2,647
A secure messaging system utilizing a limited viewing window is disclosed. The secure messaging system may prevent an entire message or image from being displayed at a single time. The secure messaging system may display the message on a first virtual layer. The secure messaging system may display a virtual smokescreen layer that conceals the first virtual layer from display. The secure messaging system may enable a visibility window allowing a small portion of the message or image to be displayed through the virtual smokescreen layer. The secure messaging system may delete the message in response to a detected attempted screenshot of the message or image.
1. A method for improving the functioning of a mobile device, the method comprising: receiving, at the mobile device configured with a touch-sensitive display screen, an electronic message comprising at least one of text, image, or video; displaying, responsive to operation of a mobile application on the mobile device, a first portion of the content of the electronic message within a visibility window on the mobile device display screen; darkening, responsive to operation of the mobile application on the mobile device, the mobile device display screen outside the visibility window to reduce the amount of electricity drawn from the battery of the mobile device due to operation of the mobile device display screen; and relocating, responsive to a touch input received at the display screen, the location of the visibility window to conceal the first portion of the electronic message and to display a second portion of the content of the electronic message. 2. A method of displaying message content on a mobile device screen in a manner resistant to copying via screenshot, the method comprising: receiving, at a first mobile device, a message originating at a second mobile device; displaying, on a first mobile device screen, only a first portion of the message content in a visibility window, the visibility window comprising only a portion of the area of the first mobile device screen; concealing, on all locations of the first mobile device screen outside the visibility window, the message content; and moving the visibility window to display a second portion of the message content. 3. The method of claim 2, wherein the first portion is displayed responsive to a user maintaining a continuous touch on the mobile device screen near the location on the mobile device screen of the first portion of the message content. 4. 
The method of claim 3, wherein the visibility window is moved in response to the user moving the continuous touch to a new location on the mobile device screen. 5. The method of claim 3, wherein the visibility window is rapidly moved to display the second portion of the message content, such that the entire message content may be discernible by the user. 6. The method of claim 2, further comprising deleting the message content in response to a screenshot of the message. 7. The method of claim 2, wherein the message content comprises at least one of text, image, or video. 8. The method of claim 2, wherein concealing the message content comprises displaying a virtual smokescreen layer on the mobile device screen. 9. A secure messaging system, comprising: a software application operative on a mobile device, wherein the software application is configured to send and receive messages via a network connection of the mobile device, wherein the software application is configured to display only a first portion of the contents of a message on a screen of the mobile device, and wherein the software application is configured to conceal the remaining portion of the contents of a message on the screen of the mobile device. 10. The system of claim 9, wherein the first portion is displayed responsive to a user of the software application touching the screen of the mobile device near the location on the screen of the first portion of the contents of the message. 11. The system of claim 10, wherein the first portion that is displayed is updated responsive to the user touching a new location on the screen of the mobile device. 12. The system of claim 10, wherein the software application is configured to rapidly update the location of the first portion on the screen of the mobile device. 13. The system of claim 9, wherein the software application is configured to delete the message in response to a screenshot of the message. 14. 
The system of claim 9, wherein the message comprises at least one of text, image, or video. 15. The system of claim 9, wherein the software application is configured to conceal the contents of the message through a virtual smokescreen layer displayed on the screen of the mobile device.
A secure messaging system utilizing a limited viewing window is disclosed. The secure messaging system may prevent an entire message or image from being displayed at a single time. The secure messaging system may display the message on a first virtual layer. The secure messaging system may display a virtual smokescreen layer that conceals the first virtual layer from display. The secure messaging system may enable a visibility window allowing a small portion of the message or image to be displayed through the virtual smokescreen layer. The secure messaging system may delete the message in response to a detected attempted screenshot of the message or image.1. A method for improving the functioning of a mobile device, the method comprising: receiving, at the mobile device configured with a touch-sensitive display screen, an electronic message comprising at least one of text, image, or video; displaying, responsive to operation of a mobile application on the mobile device, a first portion of the content of the electronic message within a visibility window on the mobile device display screen; darkening, responsive to operation of the mobile application on the mobile device, the mobile device display screen outside the visibility window to reduce the amount of electricity drawn from the battery of the mobile device due to operation of the mobile device display screen; and relocating, responsive to a touch input received at the display screen, the location of the visibility window to conceal the first portion of the electronic message and to display a second portion of the content of the electronic message. 2. 
A method of displaying message content on a mobile device screen in a manner resistant to copying via screenshot, the method comprising: receiving, at a first mobile device, a message originating at a second mobile device; displaying, on a first mobile device screen, only a portion of the message content in a visibility window, the visibility window comprising only a portion of the area of the first mobile device screen; concealing, on all locations of the mobile device screen outside the visibility window, the message content; and moving the visibility window to display a second portion of the message content. 3. The method of claim 2, wherein the first portion is displayed responsive to a user maintaining a continuous touch on the mobile device screen near the location on the mobile device screen of the first portion of the contents of the message content. 4. The method of claim 3, wherein the visibility window is moved in response to the user moving the continuous touch to a new location on the mobile device screen. 5. The method of claim 3, wherein the visibility window is rapidly moved to display the second portion of the message content, such that the entire message content may be discernible by the user. 6. The method of claim 2, further comprising deleting the message content in response to a screenshot of the message. 7. The method of claim 2, wherein the message content comprises at least one of text, image, or video. 8. The method of claim 2, wherein concealing the message content comprises displaying a virtual smokescreen layer on the mobile device screen. 9. 
A secure messaging system, comprising: a software application operative on a mobile device, wherein the software application is configured to send and receive messages via a network connection of the mobile device, wherein the software application is configured to display only a first portion of the contents of a message on a screen of the mobile device, and wherein the software application is configured to conceal the remaining portion of the contents of a message on the screen of the mobile device. 10. The system of claim 9, wherein the first portion is displayed responsive to a user of the software application touching the screen of the mobile device near the location on the screen of the first portion of the contents of the message. 11. The system of claim 10, wherein the first portion that is displayed is updated responsive to the user touching a new location on the screen of the mobile device. 12. The system of claim 10, wherein the software application is configured to rapidly update the location of the first portion on the screen of the mobile device. 13. The system of claim 9, wherein the software application is configured to delete the message in response to a screenshot of the message. 14. The system of claim 9, wherein the message comprises at least one of text, image, or video. 15. The system of claim 9, wherein the software application is configured to conceal the contents of the message through a virtual smokescreen layer displayed on the screen of the mobile device.
2,600
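The visibility-window mechanism recited in the secure-messaging claims above (a small movable region revealed through a "smokescreen" layer, with everything outside it concealed) can be sketched in a few lines. This is an illustrative model only; the function name, mask character, and window radius are assumptions, not part of the patent.

```python
def render_with_visibility_window(message, touch_index, window_radius=5):
    """Return the message with only a window of characters around the
    user's touch point visible; every other character is masked by the
    'smokescreen'. Moving touch_index relocates the window, concealing
    the previously shown portion and revealing a new one."""
    start = max(0, touch_index - window_radius)
    end = min(len(message), touch_index + window_radius + 1)
    return "".join(
        ch if start <= i < end else "\u2588"  # full-block mask character
        for i, ch in enumerate(message)
    )
```

Rendering the same message twice with different touch positions shows how relocating the window conceals the first portion while displaying a second one, so no single frame (or screenshot) contains the whole message.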
10,270
10,270
15,117,975
2,693
A cordless wireless and mobile display system having an integrated system for delivering uninterrupted power to a display is described. The cordless wireless and mobile display includes wireless capabilities that allow it to receive and transmit information from associated medical devices and to send data to other designated devices. A battery assembly includes a primary and a secondary battery that supply uninterrupted power to a display. Sensors periodically sample the power output from the battery in current use to ensure that enough power is available to the display. When sensors detect that power output levels have fallen below a preset value, indicators alert users that the system is switching to a backup power supply and that the primary battery supply needs recharging.
1. A cordless wireless mobile display system for use in a surgical procedure room comprising: a display; a wireless receiver and a wireless transmitter in communication with the display; a battery assembly that provides power to the display, the battery assembly comprising: a primary battery, wherein the primary battery provides power to the display under normal operation; and a secondary battery, wherein the secondary battery is able to provide power to the cordless mobile display system when the primary battery's output falls below a predetermined output value; a battery controller; a battery charging station; and a mobile stand supporting the display, the receiver, the transmitter, the battery assembly, and the battery charging station. 2. The display system of claim 1, wherein the wireless receiver and the wireless transmitter are able to transmit and receive information in real time. 3. The display system of claim 1, wherein the battery controller is able to monitor the power output from the primary battery and the secondary battery. 4. The display system of claim 1, wherein the battery controller is able to switch power from the primary battery to the secondary battery when the primary battery's output falls below the predetermined output value. 5. The display system of claim 4, wherein the battery controller is able to switch power from the primary battery to the secondary battery within ten milliseconds once the primary battery power output falls below the predetermined output value. 6. The display system of claim 1, wherein the battery controller is able to charge the primary battery and the secondary battery. 7. The display system of claim 1, wherein the battery controller is able to measure and display remaining time left for either the primary battery or the secondary battery. 8. The display system of claim 1, wherein the system further comprises an external wireless receiver and an external wireless transmitter. 9. 
The display system of claim 1, wherein the system further comprises a battery power indicator configured to indicate the power level of the primary battery and the secondary battery. 10. The display system of claim 1, wherein the system further comprises an alert indicator adapted to indicate when power is switched from the primary battery to the secondary battery. 11. The display system of claim 1, wherein the system further comprises at least one medically-used monitoring, sampling, or analyses component in wired or wireless communication with the display, wireless receiver and transmitter, and the battery assembly. 12. The display system of claim 1, wherein the mobile stand is supported by at least three wheels. 13. A cordless wireless mobile display system for use in a surgical procedure room comprising: a display; a wireless receiver and a wireless transmitter communicating with the display; a battery assembly that provides power to the display, the battery assembly comprising: a primary battery, wherein the primary battery provides power to the display under normal operation; and a secondary battery, wherein the secondary battery is able to provide power to the cordless mobile display system when the primary battery's output falls below a predetermined output value; at least one medically-used monitoring, sampling, or analyses component in wired or wireless communication with the display, wireless receiver and transmitter, and the battery assembly; a battery controller, wherein the battery controller is able to: monitor and detect the power output from the primary battery and the secondary battery; switch power from the primary battery to the secondary battery within ten milliseconds when the primary battery's output falls below the predetermined output value; charge the primary battery and the secondary battery; and measure and display remaining time left for either the primary battery or the secondary battery; a battery charging station; and a mobile stand 
having wheels for supporting the wireless display, the components, the battery assembly, and the battery charging station. 14. The display system of claim 13, wherein the system further comprises a battery power indicator adapted to show the power level of the primary and the secondary battery. 15. The display system of claim 14, wherein the system further comprises an alert indicator configured to indicate when power is switched from the primary battery to the secondary battery. 16. The display system of claim 13, wherein the system further comprises a USB connection transferring information on the battery assembly status. 17. The display system of claim 13, wherein a communication received or transmitted by the display system occurs in real time. 18. The display system of claim 17, wherein the communication received or transmitted is a video image. 19. A cordless wireless mobile display system for use in a surgical procedure room comprising: a display; a wireless receiver and a wireless transmitter in communication with the display; a battery assembly that provides power to the display, the battery assembly comprising: a primary battery, wherein the primary battery provides power to the display under normal operation; and a secondary battery, wherein the secondary battery is able to provide power to the cordless mobile display system when the primary battery's output falls below a predetermined output value; a battery controller; a battery charging station; and a stand for keeping the display upright. 20. 
A cordless wireless mobile display system for use in a surgical procedure room comprising: a display; a wireless receiver and a wireless transmitter communicating with the display; a battery assembly that provides power to the display, the battery assembly comprising: a primary battery, wherein the primary battery provides power to the display under normal operation; and a secondary battery, wherein the secondary battery is able to provide power to the cordless mobile display system when the primary battery's output falls below a predetermined output value; at least one medically-used monitoring, sampling, or analyses component in wired or wireless communication with the display, wireless receiver and transmitter, and the battery assembly; a battery controller, wherein the battery controller is able to: monitor and detect the power output from the primary battery and the secondary battery; switch power from the primary battery to the secondary battery within ten milliseconds when the primary battery's output falls below the predetermined output value; charge the primary battery and the secondary battery; and measure and display remaining time left for either the primary battery or the secondary battery; a battery charging station; and a stand for keeping the display upright.
A cordless wireless and mobile display system having an integrated system for delivering uninterrupted power to a display is described. The cordless wireless and mobile display includes wireless capabilities that allow it to receive and transmit information from associated medical devices and to send data to other designated devices. A battery assembly includes a primary and a secondary battery that supply uninterrupted power to a display. Sensors periodically sample the power output from the battery in current use to ensure that enough power is available to the display. When sensors detect that power output levels have fallen below a preset value, indicators alert users that the system is switching to a backup power supply and that the primary battery supply needs recharging.1. A cordless wireless mobile display system for use in a surgical procedure room comprising: a display; a wireless receiver and a wireless transmitter in communication with the display; a battery assembly that provides power to the display, the battery assembly comprising: a primary battery, wherein the primary battery provides power to the display under normal operation; and a secondary battery, wherein the secondary battery is able to provide power to the cordless mobile display system when the primary battery's output falls below a predetermined output value; a battery controller; a battery charging station; and a mobile stand supporting the display, the receiver, the transmitter, the battery assembly, and the battery charging station. 2. The display system of claim 1, wherein the wireless receiver and the wireless transmitter are able to transmit and receive information in real time. 3. The display system of claim 1, wherein the battery controller is able to monitor the power output from the primary battery and the secondary battery. 4. 
The display system of claim 1, wherein the battery controller is able to switch power from the primary battery to the secondary battery when the primary battery's output falls below the predetermined output value. 5. The display system of claim 4, wherein the battery controller is able to switch power from the primary battery to the secondary battery within ten milliseconds once the primary battery power output falls below the predetermined output value. 6. The display system of claim 1, wherein the battery controller is able to charge the primary battery and the secondary battery. 7. The display system of claim 1, wherein the battery controller is able to measure and display remaining time left for either the primary battery or the secondary battery. 8. The display system of claim 1, wherein the system further comprises an external wireless receiver and an external wireless transmitter. 9. The display system of claim 1, wherein the system further comprises a battery power indicator configured to indicate the power level of the primary battery and the secondary battery. 10. The display system of claim 1, wherein the system further comprises an alert indicator adapted to indicate when power is switched from the primary battery to the secondary battery. 11. The display system of claim 1, wherein the system further comprises at least one medically-used monitoring, sampling, or analyses component in wired or wireless communication with the display, wireless receiver and transmitter, and the battery assembly. 12. The display system of claim 1, wherein the mobile stand is supported by at least three wheels. 13. 
A cordless wireless mobile display system for use in a surgical procedure room comprising: a display; a wireless receiver and a wireless transmitter communicating with the display; a battery assembly that provides power to the display, the battery assembly comprising: a primary battery, wherein the primary battery provides power to the display under normal operation; and a secondary battery, wherein the secondary battery is able to provide power to the cordless mobile display system when the primary battery's output falls below a predetermined output value; at least one medically-used monitoring, sampling, or analyses component in wired or wireless communication with the display, wireless receiver and transmitter, and the battery assembly; a battery controller, wherein the battery controller is able to: monitor and detect the power output from the primary battery and the secondary battery; switch power from the primary battery to the secondary battery within ten milliseconds when the primary battery's output falls below the predetermined output value; charge the primary battery and the secondary battery; and measure and display remaining time left for either the primary battery or the secondary battery; a battery charging station; and a mobile stand having wheels for supporting the wireless display, the components, the battery assembly, and the battery charging station. 14. The display system of claim 13, wherein the system further comprises a battery power indicator adapted to show the power level of the primary and the secondary battery. 15. The display system of claim 14, wherein the system further comprises an alert indicator configured to indicate when power is switched from the primary battery to the secondary battery. 16. The display system of claim 13, wherein the system further comprises a USB connection transferring information on the battery assembly status. 17. 
The display system of claim 13, wherein a communication received or transmitted by the display system occurs in real time. 18. The display system of claim 17, wherein the communication received or transmitted is a video image. 19. A cordless wireless mobile display system for use in a surgical procedure room comprising: a display; a wireless receiver and a wireless transmitter in communication with the display; a battery assembly that provides power to the display, the battery assembly comprising: a primary battery, wherein the primary battery provides power to the display under normal operation; and a secondary battery, wherein the secondary battery is able to provide power to the cordless mobile display system when the primary battery's output falls below a predetermined output value; a battery controller; a battery charging station; and a stand for keeping the display upright. 20. A cordless wireless mobile display system for use in a surgical procedure room comprising: a display; a wireless receiver and a wireless transmitter communicating with the display; a battery assembly that provides power to the display, the battery assembly comprising: a primary battery, wherein the primary battery provides power to the display under normal operation; and a secondary battery, wherein the secondary battery is able to provide power to the cordless mobile display system when the primary battery's output falls below a predetermined output value; at least one medically-used monitoring, sampling, or analyses component in wired or wireless communication with the display, wireless receiver and transmitter, and the battery assembly; a battery controller, wherein the battery controller is able to: monitor and detect the power output from the primary battery and the secondary battery; switch power from the primary battery to the secondary battery within ten milliseconds when the primary battery's output falls below the predetermined output value; charge the primary battery and the 
secondary battery; and measure and display remaining time left for either the primary battery or the secondary battery; a battery charging station; and a stand for keeping the display upright.
2,600
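The failover behavior described in the record above (periodic sampling of the active battery, switching to the secondary supply when output falls below a preset value, and alerting the user) can be modeled as a small state machine. The class name, alert text, and threshold below are illustrative assumptions, not details from the patent.

```python
class BatteryController:
    """Toy model of the claimed uninterrupted-power scheme: the controller
    samples the battery in current use and, when the primary battery's
    output falls below the preset threshold, switches to the secondary
    battery and records an alert that the primary needs recharging."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.active = "primary"
        self.alerts = []

    def sample(self, primary_output):
        # Periodic sample of the power output from the battery in use.
        if self.active == "primary" and primary_output < self.threshold:
            self.active = "secondary"
            self.alerts.append("switched to secondary; recharge primary")
        return self.active
```

In a real device the sampling would run on a timer and the switchover would complete within the ten-millisecond window the claims recite; this sketch only captures the decision logic.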
10,271
10,271
15,155,886
2,656
An aspect provides a method, including: receiving, at an input component of an information handling device, user input comprising one or more words; identifying, using a processor of the information handling device, an emotion associated with the one or more words; creating, using the processor, an emotion tag including the emotion associated with the one or more words; storing the emotion tag in a memory; analyzing one or more emotion tags; and modifying an operation of an application based on the analyzing. Other embodiments are described and claimed.
1. A method, comprising: receiving, at an input component of an information handling device, user input comprising one or more words; identifying, using a processor of the information handling device, an emotion associated with the one or more words; creating, using the processor, an emotion tag including the emotion associated with the one or more words; storing the emotion tag in a memory; analyzing one or more emotion tags; and modifying an operation of an application based on the analyzing. 2. The method of claim 1, further comprising: modifying the user input based on the analyzing. 3. The method of claim 2, wherein modifying the user input comprises changing the visual rendering of the user input. 4. The method of claim 1, wherein the storing of the emotion tag in a memory occurs remote from the information handling device. 5. The method of claim 1, wherein the storing of the emotion tag in a memory occurs locally to the information handling device. 6. The method of claim 1, wherein: the user input comprises speech input; and the identifying an emotion associated with the one or more words comprises using an acoustic characteristic of the speech input to identify an emotion. 7. The method of claim 6, further comprising: receiving additional speech input; wherein the using an acoustic characteristic of the speech input to identify an emotion comprises comparing an acoustic characteristic of the speech input to an acoustic characteristic of the additional speech input. 8. The method of claim 1, wherein the modifying an operation of an application comprises supplementing a search application result utilizing emotion tag searching. 9. The method of claim 1, wherein the modifying an operation of an application comprises providing a prompt to a user prior to sending a message including the user input. 10. The method of claim 1, wherein the modifying an operation of an application comprises assigning a priority level to a message including the user input. 11. 
An information handling device, comprising: an input component; a processor; a memory device accessible to the processor and storing code executable by the processor to: receive, at an input component, user input comprising one or more words; identify an emotion associated with the one or more words; create an emotion tag including the emotion associated with the one or more words; store the emotion tag in a memory; analyze one or more emotion tags; and modify an operation of an application based on the analyzing. 12. The information handling device of claim 11, wherein the code is further executable by the processor to: modify the user input based on the analyzing. 13. The information handling device of claim 12, wherein to modify the user input comprises changing the visual rendering of the user input. 14. The information handling device of claim 11, wherein storing of the emotion tag in a memory occurs remote from the information handling device. 15. The information handling device of claim 11, wherein storing of the emotion tag in a memory occurs locally to the information handling device. 16. The information handling device of claim 11, wherein: the user input comprises speech input; and to identify an emotion associated with the one or more words comprises using an acoustic characteristic of the speech input to identify an emotion. 17. The information handling device of claim 16, wherein the code is further executable by the processor to: receive additional speech input; wherein the using an acoustic characteristic of the speech input to identify an emotion comprises comparing an acoustic characteristic of the speech input to an acoustic characteristic of the additional speech input. 18. The information handling device of claim 11, wherein to modify an operation of an application comprises supplementing a search application result utilizing emotion tag searching. 19. 
The information handling device of claim 11, wherein to modify an operation of an application comprises providing a prompt to a user prior to sending a message including the user input. 20. A program product, comprising: a storage device having computer readable program code stored therewith, the computer readable program code being executable by a processor and comprising: computer readable program code that receives, at an input component of an information handling device, user input comprising one or more words; computer readable program code that identifies, using a processor of the information handling device, an emotion associated with the one or more words; computer readable program code that creates, using the processor, an emotion tag including the emotion associated with the one or more words; computer readable program code that stores the emotion tag in a memory; computer readable program code that analyzes one or more emotion tags; and computer readable program code that modifies an operation of an application based on the analyzing.
An aspect provides a method, including: receiving, at an input component of an information handling device, user input comprising one or more words; identifying, using a processor of the information handling device, an emotion associated with the one or more words; creating, using the processor, an emotion tag including the emotion associated with the one or more words; storing the emotion tag in a memory; analyzing one or more emotion tags; and modifying an operation of an application based on the analyzing. Other embodiments are described and claimed.1. A method, comprising: receiving, at an input component of an information handling device, user input comprising one or more words; identifying, using a processor of the information handling device, an emotion associated with the one or more words; creating, using the processor, an emotion tag including the emotion associated with the one or more words; storing the emotion tag in a memory; analyzing one or more emotion tags; and modifying an operation of an application based on the analyzing. 2. The method of claim 1, further comprising: modifying the user input based on the analyzing. 3. The method of claim 2, wherein modifying the user input comprises changing the visual rendering of the user input. 4. The method of claim 1, wherein the storing of the emotion tag in a memory occurs remote from the information handling device. 5. The method of claim 1, wherein the storing of the emotion tag in a memory occurs locally to the information handling device. 6. The method of claim 1, wherein: the user input comprises speech input; and the identifying an emotion associated with the one or more words comprises using an acoustic characteristic of the speech input to identify an emotion. 7. 
The method of claim 6, further comprising: receiving additional speech input; wherein the using an acoustic characteristic of the speech input to identify an emotion comprises comparing an acoustic characteristic of the speech input to an acoustic characteristic of the additional speech input. 8. The method of claim 1, wherein the modifying an operation of an application comprises supplementing a search application result utilizing emotion tag searching. 9. The method of claim 1, wherein the modifying an operation of an application comprises providing a prompt to a user prior to sending a message including the user input. 10. The method of claim 1, wherein the modifying an operation of an application comprises assigning a priority level to a message including the user input. 11. An information handling device, comprising: an input component; a processor; a memory device accessible to the processor and storing code executable by the processor to: receive, at an input component, user input comprising one or more words; identify an emotion associated with the one or more words; create an emotion tag including the emotion associated with the one or more words; store the emotion tag in a memory; analyze one or more emotion tags; and modify an operation of an application based on the analyzing. 12. The information handling device of claim 11, wherein the code is further executable by the processor to: modify the user input based on the analyzing. 13. The information handling device of claim 12, wherein to modify the user input comprises changing the visual rendering of the user input. 14. The information handling device of claim 11, wherein storing of the emotion tag in a memory occurs remote from the information handling device. 15. The information handling device of claim 11, wherein storing of the emotion tag in a memory occurs locally to the information handling device. 16. 
The information handling device of claim 11, wherein: the user input comprises speech input; and to identify an emotion associated with the one or more words comprises using an acoustic characteristic of the speech input to identify an emotion. 17. The information handling device of claim 16, wherein the code is further executable by the processor to: receive additional speech input; wherein the using an acoustic characteristic of the speech input to identify an emotion comprises comparing an acoustic characteristic of the speech input to an acoustic characteristic of the additional speech input. 18. The information handling device of claim 11, wherein to modify an operation of an application comprises supplementing a search application result utilizing emotion tag searching. 19. The information handling device of claim 11, wherein to modify an operation of an application comprises providing a prompt to a user prior to sending a message including the user input. 20. A program product, comprising: a storage device having computer readable program code stored therewith, the computer readable program code being executable by a processor and comprising: computer readable program code that receives, at an input component of an information handling device, user input comprising one or more words; computer readable program code that identifies, using a processor of the information handling device, an emotion associated with the one or more words; computer readable program code that creates, using the processor, an emotion tag including the emotion associated with the one or more words; computer readable program code that stores the emotion tag in a memory; computer readable program code that analyzes one or more emotion tags; and computer readable program code that modifies an operation of an application based on the analyzing.
2,600
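The flow claimed in the record above (identify an emotion for the input words, create and store an emotion tag, then modify an application's operation from the analyzed tags) can be sketched as below. The keyword lexicon, function names, and the priority rule are stand-in assumptions for illustration, not the patented identification method.

```python
# Hypothetical word-to-emotion lexicon; the patent does not specify one.
EMOTION_LEXICON = {"furious": "anger", "thrilled": "joy", "sorry": "sadness"}

def tag_emotion(text, store):
    """Identify an emotion for the words, create an emotion tag,
    and store it (here, in an in-memory list)."""
    words = text.lower().split()
    emotion = next(
        (EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON),
        "neutral",
    )
    store.append({"text": text, "emotion": emotion})
    return emotion

def message_priority(store):
    # Example application modification from the analyzed tags:
    # escalate the message priority if any tag indicates anger.
    return "high" if any(t["emotion"] == "anger" for t in store) else "normal"
```

The same stored tags could instead supplement search results or trigger a prompt before sending, matching the other application modifications the claims enumerate.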
10,272
10,272
14,561,711
2,616
Systems and methods may provide for receiving a pixel shader and sending the pixel shader to shader bypass hardware if the pixel shader and a render target associated with the pixel shader satisfy a simplicity condition. In one example, the shader bypass hardware is dedicated to pixel shaders and associated render targets that satisfy the simplicity condition.
1. A system comprising: a data interface including one or more of a network controller, a memory controller or a bus, the data interface to obtain a pixel shader associated with a scene; shader bypass hardware; and a host processor including a shader compiler to receive the pixel shader, the shader compiler having a shader disabler to prepare the pixel shader for execution on the shader bypass hardware if the pixel shader and a render target associated with the pixel shader satisfy a simplicity condition. 2. The system of claim 1, wherein the shader bypass hardware is dedicated to pixel shaders and associated render targets that satisfy the simplicity condition. 3. The system of claim 1, wherein the simplicity condition includes one or more operations of the pixel shader corresponding to a predetermined operation type. 4. The system of claim 1, wherein the simplicity condition includes a precision level of the render target being below a predetermined precision threshold. 5. The system of claim 4, wherein the predetermined precision threshold is a floating point precision threshold. 6. The system of claim 1, further including a graphics processor pipeline, wherein the shader compiler further includes a shader enabler to prepare the pixel shader for execution by the graphics processor pipeline if the pixel shader and the render target do not satisfy the simplicity condition. 7. An apparatus comprising: shader bypass hardware; and a host processor including a shader compiler to receive a pixel shader, the shader compiler having a shader disabler to prepare the pixel shader for execution on the shader bypass hardware if the pixel shader and a render target associated with the pixel shader satisfy a simplicity condition. 8. The apparatus of claim 7, wherein the shader bypass hardware is dedicated to pixel shaders and associated render targets that satisfy the simplicity condition. 9. 
The apparatus of claim 7, wherein the simplicity condition includes one or more operations of the pixel shader corresponding to a predetermined operation type. 10. The apparatus of claim 7, wherein the simplicity condition includes a precision level of the render target being below a predetermined precision threshold. 11. The apparatus of claim 10, wherein the predetermined precision threshold is a floating point precision threshold. 12. The apparatus of claim 7, wherein the shader compiler further includes a shader enabler to prepare the pixel shader for execution by a graphics processor pipeline if the pixel shader and the render target do not satisfy the simplicity condition. 13. A method comprising: receiving a pixel shader; and preparing the pixel shader for execution on the shader bypass hardware if the pixel shader and a render target associated with the pixel shader satisfy a simplicity condition. 14. The method of claim 13, wherein the shader bypass hardware is dedicated to pixel shaders and associated render targets that satisfy the simplicity condition. 15. The method of claim 13, wherein the simplicity condition includes one or more operations of the pixel shader corresponding to a predetermined operation type. 16. The method of claim 13, wherein the simplicity condition includes a precision level of the render target being below a predetermined precision threshold. 17. The method of claim 16, wherein the predetermined precision threshold is a floating point precision threshold. 18. The method of claim 13, further including preparing the pixel shader for execution by a graphics processor pipeline if the pixel shader and the render target do not satisfy the simplicity condition. 19. 
At least one computer readable storage medium comprising a set of instructions which, when executed by a computing device, cause the computing device to: receive a pixel shader; and prepare the pixel shader for execution on the shader bypass hardware if the pixel shader and a render target associated with the pixel shader satisfy a simplicity condition. 20. The at least one computer readable storage medium of claim 19, wherein the shader bypass hardware is to be dedicated to pixel shaders and associated render targets that satisfy the simplicity condition. 21. The at least one computer readable storage medium of claim 19, wherein the simplicity condition includes one or more operations of the pixel shader corresponding to a predetermined operation type. 22. The at least one computer readable storage medium of claim 19, wherein the simplicity condition includes a precision level of the render target being below a predetermined precision threshold. 23. The at least one computer readable storage medium of claim 22, wherein the predetermined precision threshold is a floating point precision threshold. 24. The at least one computer readable storage medium of claim 19, wherein the instructions, when executed, cause a computing device to prepare the pixel shader for execution by a graphics processor pipeline if the pixel shader and the render target do not satisfy the simplicity condition.
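The dispatch logic these claims describe — route a pixel shader to dedicated bypass hardware when it and its render target satisfy a "simplicity condition", otherwise to the normal graphics pipeline — can be sketched as below. This is a minimal illustration, not the patented implementation: the names `PixelShader`, `ALLOWED_OPS`, and `PRECISION_THRESHOLD_BITS` are assumptions standing in for the claims' "predetermined operation type" and "predetermined precision threshold".

```python
from dataclasses import dataclass

# Assumed stand-ins for the claims' "predetermined operation type" and
# "predetermined (floating point) precision threshold" -- illustrative only.
ALLOWED_OPS = {"mov", "mul", "add"}
PRECISION_THRESHOLD_BITS = 32

@dataclass
class PixelShader:
    ops: list          # operation mnemonics making up the shader

@dataclass
class RenderTarget:
    precision_bits: int  # precision level of the render target

def is_simple(shader: PixelShader, target: RenderTarget) -> bool:
    """Simplicity condition: every shader op is of a permitted type AND
    the render-target precision is below the predetermined threshold."""
    ops_ok = all(op in ALLOWED_OPS for op in shader.ops)
    precision_ok = target.precision_bits < PRECISION_THRESHOLD_BITS
    return ops_ok and precision_ok

def dispatch(shader: PixelShader, target: RenderTarget) -> str:
    # "shader disabler" path (bypass) vs. "shader enabler" path (pipeline)
    return "bypass_hardware" if is_simple(shader, target) else "graphics_pipeline"
```

Both prongs of the condition must hold: a shader with only simple ops still goes to the full pipeline if the render target is high precision, and vice versa.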
2,600
10,273
10,273
15,312,438
2,683
The present invention relates to an automatic wireless Bluetooth®-enabled garage door opener, gate opener or other automatic door opening system and method. Specifically, the door opener system and method comprises a Bluetooth wireless receiver interconnected to a low voltage wired garage door opening system, and a control application on a smart device, such as a smart phone, tablet, or other like Bluetooth®-enabled wireless transmitter.
1. A smart device comprising: an app for controlling, via Bluetooth protocol, an opener for a door by sending a Bluetooth signal to a receiver to trigger the opening or closing of the door.
2,600
10,274
10,274
14,644,236
2,665
A circuit arrangement is provided, having an antenna, a first circuit coupled to the antenna and configured to receive an antenna signal, which contains first data, and to process the antenna signal, as a result of which it produces a processed antenna signal having a different voltage level than that of the antenna signal. The first data are data transmitted from a communication device to the arrangement. The arrangement further includes a second circuit coupled to the first circuit via a wire-based interface. The first circuit is configured to transmit the processed antenna signal to the second circuit by a first channel of the wire-based interface and the second circuit is configured to transmit second data to the first circuit by a second channel of the wire-based interface. The first channel and the second channel are different. The second data are configuration data for configuring the first circuit.
1. A circuit arrangement, comprising: an antenna; a first circuit that is coupled to the antenna and is set up to receive an antenna signal, which contains first data, from the antenna and to process the antenna signal, as a result of which it produces a processed antenna signal having a different voltage level than that of the antenna signal, wherein the first data are data transmitted from a communication device to the circuit arrangement; a second circuit that is coupled to the first circuit via a wire-based interface; wherein the first circuit is set up to transmit the processed antenna signal to the second circuit by a first communication channel of the wire-based interface; and wherein the second circuit is set up to transmit second data to the first circuit by a second communication channel of the wire-based interface, wherein the first communication channel and the second communication channel are different, wherein the second data are configuration data for configuring the first circuit. 2. The circuit arrangement as claimed in claim 1, wherein the second circuit is set up to transmit third data to the first circuit by the first communication channel of the wire-based interface. 3. The circuit arrangement as claimed in claim 2, wherein the third data are data to be transmitted from the first circuit to a communication device by the antenna. 4. The circuit arrangement as claimed in claim 1, wherein the first communication channel and the second communication channel differ in an initial sequence used for a data transmission. 5. The circuit arrangement as claimed in claim 1, wherein the first communication channel and the second communication channel differ in a carrier frequency used for a data transmission. 6. The circuit arrangement as claimed in claim 1, wherein the first communication channel and the second communication channel differ in a pulse frequency used for a data transmission. 7. 
The circuit arrangement as claimed in claim 1, wherein the first communication channel and the second communication channel differ in an amplitude used for a data transmission. 8. The circuit arrangement as claimed in claim 1, wherein the second communication channel is set up as a bidirectional communication channel. 9. The circuit arrangement as claimed in claim 1, wherein the first circuit is a radio transmission front end. 10. The circuit arrangement as claimed in claim 1, wherein the first circuit has a phase locked loop that is synchronized to the communication device. 11. The circuit arrangement as claimed in claim 1, wherein the other voltage level is a higher voltage level. 12. The circuit arrangement as claimed in claim 1, wherein the second circuit is a secure element. 13. The circuit arrangement as claimed in claim 1, wherein the second circuit is a near field communication security element. 14. The circuit arrangement as claimed in claim 1, further comprising: a Subscriber Identity Module card or a microSD card that contains the second circuit. 15. The circuit arrangement as claimed in claim 1, wherein the first circuit is set up to detect whether the first data are addressed to the circuit arrangement and is set up to transmit the first data to the second circuit if the first data are addressed to the circuit arrangement. 16. The circuit arrangement as claimed in claim 1, wherein the first circuit is set up for wireless communication based on ISO/IEC 14443 by the antenna. 17. The circuit arrangement as claimed in claim 1, wherein the first circuit implements a portion of the logic required for wireless communication based on ISO/IEC 14443. 18. The circuit arrangement as claimed in claim 1, wherein the interface implements a digital contactless bridge interface or an advanced contactless bridge interface.
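The two logical channels on the wire-based interface can be modeled in software. The sketch below is an assumption-laden illustration, not the claimed hardware: it distinguishes the channels by an initial sequence, which is just one of the differentiators the claims enumerate (initial sequence, carrier frequency, pulse frequency, amplitude), and the frame prefixes `0xA0`/`0xA5` are invented for demonstration.

```python
# Hypothetical frame routing for the two channels of the wire-based
# interface; prefixes are illustrative, not from the patent text.
CH1_PREFIX = b"\xA0"  # assumed initial sequence: processed antenna signal
CH2_PREFIX = b"\xA5"  # assumed initial sequence: configuration data

def route_frame(frame: bytes):
    """Return (channel, payload). Channel 1 carries the processed
    antenna signal (and, per claim 2, optional third data); channel 2
    carries configuration data for the first circuit."""
    if frame.startswith(CH1_PREFIX):
        return ("channel1", frame[len(CH1_PREFIX):])
    if frame.startswith(CH2_PREFIX):
        return ("channel2", frame[len(CH2_PREFIX):])
    raise ValueError("unknown initial sequence")
```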
2,600
10,275
10,275
15,092,082
2,689
A hospital bed is configured to monitor data from a second patient support based on the one or more alarms set by the user. The hospital bed detects whether an alarm triggering event occurred based on the monitored data. In response to a determination that the alarm triggering event occurred, the hospital bed will provide a signal indicative of the alarm triggering event to a nurse call system.
1. A patient support for use in a healthcare facility having at least one secondary patient support, the patient support comprising: a display operable to render a graphical user interface (GUI) to interface with a user and allow the user to set one or more alarms, wherein each alarm corresponds to an alarm triggering event triggered by an action of a patient; communication circuitry to communicatively couple the patient support to a healthcare communication system and the patient support to the at least one secondary patient support; and a control system to monitor data of the patient support and the at least one secondary patient support based on the set alarms, detect whether an alarm triggering event occurred at either the patient support or the at least one secondary patient support based on the monitored data, and, in response to a determination that the alarm triggering event occurred, provide a signal indicative of the alarm triggering event to the healthcare communication system. 2. The patient support of claim 1, wherein the control system of the patient support determines, from information provided by the secondary patient support, whether a triggering event has occurred at the secondary patient support and provides a signal indicative that the alarm triggering event occurred at the at least one secondary patient support. 3. The patient support of claim 2, wherein the at least one secondary patient support comprises at least one of a chair, a toilet, a stretcher, and a lift. 4. The patient support of claim 2, wherein the at least one secondary patient support comprises a chair, and wherein the alarms include a chair exit alarm to trigger an alarm in response to a determination that a patient previously sitting on the chair is not presently sitting on the chair. 5. 
The patient support of claim 2, wherein the at least one secondary patient support comprises a toilet, and wherein the alarms include at least one of an on-toilet alarm to trigger an alarm in response to a determination that a patient has sat on the toilet and an off-toilet alarm to trigger an alarm in response to a determination that a patient previously sitting on the toilet is not presently sitting on the toilet. 6. The patient support of claim 2, wherein the at least one secondary patient support comprises a stretcher, and wherein the alarms include at least one of a position alarm to trigger an alarm in response to a determination that a present position of the stretcher is in a predetermined position, an exiting alarm to trigger an alarm in response to a determination that the patient is detected as attempting to exit the stretcher, and an out of stretcher alarm to trigger an alarm in response to a determination that the patient has exited the stretcher. 7. The patient support of claim 2, wherein the GUI is further configured to allow the user to control functions of the patient support. 8. The patient support of claim 7, wherein the GUI is further configured to facilitate a wireless network connection between the patient support and the at least one secondary patient support. 9. The patient support of claim 8, wherein the wireless network connection comprises a Bluetooth network connection. 10. The patient support of claim 1, wherein the GUI is further configured to facilitate a wireless network connection between the patient support and the at least one secondary patient support. 11. The patient support of claim 9, wherein the GUI is further configured to provide, based on the alarm settings, at least one of a visual indication and an audible noise that the alarm triggering event was detected. 12. The patient support of claim 11, wherein the data comprises a present weight being applied to the patient support and the at least one secondary patient support. 
13. The patient support of claim 1, wherein the data comprises a present weight being applied to the patient support and the at least one secondary patient support. 14. The patient support of claim 1, wherein the event capable of triggering the alarm triggering event includes a change in position on the at least one secondary patient support while maintaining contact with the at least one secondary patient support. 15. The patient support of claim 1, wherein the event capable of triggering the alarm triggering event includes a change in position on the patient support while maintaining contact with the patient support. 16. The patient support of claim 1, wherein the one or more alarms comprise at least one of one or more alarms of the patient support and one or more alarms of the at least one secondary patient support. 17. The patient support of claim 16, wherein the GUI is further configured to provide, based on the alarm settings, at least one of a visual indication and an audible noise that the alarm triggering event was detected. 18. The patient support of claim 1, wherein the GUI is further configured to provide, based on the alarm settings, at least one of a visual indication and an audible noise that the alarm triggering event was detected. 19. The patient support of claim 1, wherein the patient support is a hospital bed. 20. The patient support of claim 19, wherein the event capable of triggering the alarm triggering event includes a change in position on the at least one secondary patient support while maintaining contact with the at least one secondary patient support.
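The monitoring loop the claims describe — watch weight data from the patient support and any secondary support, and signal the healthcare communication system when a user-enabled alarm's triggering event occurs — can be sketched as follows. This is a hedged illustration only: the occupancy weight threshold and the event names (`chair_exit`, `on_toilet`) are assumptions, not values from the patent.

```python
# Assumed minimum weight (kg) indicating the support is occupied.
EXIT_WEIGHT_KG = 25.0

def detect_alarm_events(prev_weight_kg, curr_weight_kg, alarms_set):
    """Compare successive weight readings from a (secondary) patient
    support against the set of user-enabled alarms; return the alarm
    triggering events that occurred."""
    events = []
    was_occupied = prev_weight_kg >= EXIT_WEIGHT_KG
    is_occupied = curr_weight_kg >= EXIT_WEIGHT_KG
    if "chair_exit" in alarms_set and was_occupied and not is_occupied:
        events.append("chair_exit")   # patient left the chair
    if "on_toilet" in alarms_set and not was_occupied and is_occupied:
        events.append("on_toilet")    # patient sat on the toilet
    return events

def notify(events, send):
    # send() stands in for the signal to the healthcare communication system
    for event in events:
        send(event)
```

Only alarms the user has set are evaluated, matching claim 1's "monitor data ... based on the set alarms".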
2,600
10,276
10,276
15,761,959
2,675
In one implementation, a system for interpolating pixel values includes a receiver engine to receive a plurality of scan lines each captured using only two of three different colors of lights from a scanner, wherein each scan line comprises received pixel values for the two different colors of lights used to illuminate the plurality of scan lines, an interpolate engine to interpolate, using the received pixel values for the two different colors, pixel values for the third one of the three different colors for each scan line to result in a color map, and a display engine to cause to display information relating to an image including the received pixel values and the interpolated pixel values for the three different colors for the plurality of scan lines.
1. A system, comprising: a receiver engine to receive a plurality of scan lines each captured using only two of three different colors of lights from a scanner, wherein each scan line comprises received pixel values for the two different colors of lights used to illuminate the plurality of scan lines; an interpolate engine to interpolate, using the received pixel values for the two different colors, pixel values for the third one of the three different colors for each scan line to result in a color map; and a display engine to cause to display information relating to an image including the received pixel values and the interpolated pixel values for the three different colors for the plurality of scan lines. 2. The system of claim 1, wherein the interpolate engine determines the pixel values for the third color by weighted interpolation of the pixel values of the two colors with respect to the pixel values of the third color. 3. The system of claim 1, wherein the interpolated pixel values are weighted by the color map. 4. The system of claim 1, further including an edge map engine to determine pixels adjacent to a pixel on an edge of the image. 5. The system of claim 1, wherein the interpolate engine uses red and green colors for the pixel values of each scan line to interpolate pixel values for a blue color for each scan line. 6. The system of claim 1, wherein the interpolate engine uses green and blue colors for the pixel values of each scan line to interpolate pixel values for a red color for each scan line. 7. 
A method, comprising: receiving, at a computing device, a plurality of scan lines each captured using only two of three different colors of lights from a scanner, wherein capturing the plurality of scan lines includes: illuminating a first scan line with a reference color and with a first of two alternate colors of the three different colors; and illuminating a second scan line with the reference color and with a second of the two alternate colors of the three different colors; interpolating pixel values for the second of the two alternate colors for the first scan line using pixel values of the reference color and pixel values of the first of the two alternate colors; interpolating pixel values for the first of the two alternate colors for the second scan line using pixel values of the reference color and pixel values of the second of the two alternate colors; and causing to display information relating to an image including the pixel values for the first of two alternate colors, the second of two alternate colors, and the reference color for the plurality of scan lines. 8. The method of claim 7, wherein the reference color is green, the first of the two alternate colors is red, and wherein the pixel values for the red color for the first scan line are weighted by an interpolation using a red, green, blue (RGB) color map. 9. The method of claim 7, wherein the reference color is green, the second of the two alternate colors is blue, and wherein the pixel values for the blue color for the second scan line are weighted by an interpolation using a red, green, blue (RGB) color map. 10. The method of claim 7, wherein the method includes determining a plurality of pixels adjacent to a pixel on an edge of the image by an edge map. 11. The method of claim 10, wherein the method includes interpolating the pixel value for the first or the second of the two alternate colors for the pixel on the edge of the image using the plurality of adjacent pixels. 12. 
A non-transitory computer readable medium storing instructions executable by a processing resource to cause a computing device to: receive, from a scanner, a plurality of scan lines each captured using a green color and a red or a blue color of lights from the scanner; interpolate, using the green color and the red color, pixel values for the blue color of a first scan line; interpolate, using the green color and the blue color, pixel values for the red color of a second scan line; and cause to display, via a user interface, information related to an image including the plurality of scan lines. 13. The medium of claim 12, comprising instructions to interpolate the pixel values for the blue color of the first scan line and the pixel values for the red color of the second scan line by bilinear interpolation. 14. The medium of claim 12, comprising instructions to interpolate the pixel values for the blue color of the first scan line and the pixel values for the red color of the second scan line by bicubic interpolation. 15. The medium of claim 12, comprising instructions to interpolate the pixel values for the blue color of the first scan line and the pixel values for the red color of the second scan line by spline based interpolation.
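The claims above describe filling in the third color channel for each scan line from the two channels that were captured, with claims 13–15 naming bilinear, bicubic, and spline interpolation as candidate methods. A minimal sketch of the bilinear case along the scan direction follows; the function name, the alternating-row capture scheme, and the NumPy array representation are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def fill_missing_rows(channel, captured_mask):
    # Linearly interpolate scan lines (rows) where this color channel was
    # not captured, from the nearest captured rows above and below --
    # the bilinear-interpolation case named in claim 13.
    filled = channel.astype(float).copy()
    rows = np.arange(channel.shape[0])
    captured = rows[captured_mask]
    for col in range(channel.shape[1]):
        filled[:, col] = np.interp(rows, captured, channel[captured_mask, col])
    return filled

# Example: blue was captured only on even scan lines (odd lines were
# illuminated with green and red instead), so odd rows are interpolated.
blue = np.array([[10., 10.], [0., 0.], [30., 30.], [0., 0.], [50., 50.]])
mask = np.array([True, False, True, False, True])
blue_filled = fill_missing_rows(blue, mask)
```

Each missing row lands halfway between its captured neighbors (row 1 becomes 20, row 3 becomes 40 in this example), which is the one-dimensional analogue of the weighted interpolation recited in claim 2.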
2,600
10,277
10,277
14,868,550
2,613
A non-transitory storage medium stores instructions executable by an information processing device including an operation device and a display. The instructions cause the information processing device to: display a first image; display a cropping frame for cropping of the first image when the operation device accepts a user operation for displaying the cropping frame in a state in which the first image is displayed; rotate the first image about a center of the cropping frame when the operation device accepts a user operation for rotating the first image in a state in which the first image and the cropping frame are displayed; and rotate the first image about a center of the first image when the operation device accepts a user operation for rotating the first image in a state in which the first image is displayed, and the cropping frame is not displayed.
1. A non-transitory storage medium storing a plurality of instructions executable by a processor of an information processing device, the information processing device comprising: an operation device configured to accept a user operation; and a display, the plurality of instructions, when executed by the processor, causing the information processing device to perform: displaying a first image on the display based on image data in a first display process; displaying a cropping frame on the display, in a second display process, when the operation device accepts the user operation for displaying the cropping frame in a state in which the first image is displayed on the display by the first display process, the cropping frame being for cropping of the first image; rotating the first image about a center of the cropping frame, in a first image rotation process, when the operation device accepts the user operation for rotating the first image in a state in which the first image is displayed on the display in the first display process, and the cropping frame is displayed on the display by the second display process; and rotating the first image about a center of the first image, in a second image rotation process, when the operation device accepts the user operation for rotating the first image in a state in which the first image is displayed on the display in the first display process, and the cropping frame is not displayed on the display by the second display process. 2. 
The non-transitory storage medium according to claim 1, wherein when executed by the processor, the plurality of instructions cause the information processing device to perform: determining, in a determination process, whether a portion of the first image is to be displayed within an entirety of an inside area of the cropping frame displayed on the display, when the first image displayed on the display is rotated by the first image rotation process; when it is determined in the determination process that a portion of the first image is not to be displayed within the entirety of the inside area, displaying a particular image in a third display process on a portion of the inside area on which the portion of the first image is not displayed. 3. The non-transitory storage medium according to claim 1, wherein when executed by the processor, the plurality of instructions cause the information processing device to perform: in the second display process, displaying a plurality of cropping frames each as the cropping frame in the state in which the first image is displayed on the display by the first display process; and in the first image rotation process, when the operation device accepts the user operation for rotating the first image in a state in which the first image is displayed on the display by the first display process, and the plurality of cropping frames are displayed on the display by the second display process, rotating a portion of the first image, which portion is displayed on an inside area of at least one cropping frame of the plurality of cropping frames, about a center of the at least one cropping frame. 4. 
The non-transitory storage medium according to claim 3, wherein the first image is divided into a plurality of partial images, and wherein when executed by the processor, the plurality of instructions cause the information processing device to perform: in the second display process, displaying the plurality of cropping frames respectively on the plurality of partial images; and in the first display process, displaying each of the plurality of partial images on an entirety of an inside area of a corresponding one of the plurality of cropping frames and an outside area located outside the corresponding one of the plurality of cropping frames. 5. The non-transitory storage medium according to claim 1, wherein when executed by the processor, the plurality of instructions cause the information processing device to perform: when the operation device accepts the user operation for cropping the first image in a state in which the cropping frame is displayed on the display in the second display process, in the first display process, displaying the first image on the display in a state in which the first image is cropped based on the cropping frame; and rotating the first image displayed on the display, in the first image rotation process, in a state in which at least a portion of the first image is displayed on an inside area of the cropping frame and an outside area located outside the cropping frame. 6. 
The non-transitory storage medium according to claim 1, wherein the operation device comprises a first operation element displayed on the display; and a second operation element different from the first operation element and displayed on the display, wherein when executed by the processor, the plurality of instructions cause the information processing device to, in the first image rotation process, perform: rotating the first image about the center of the cropping frame by a first angle when the first operation element accepts the user operation for rotating the first image in the state in which the first image is displayed on the display by the first display process, and the cropping frame is displayed on the display by the second display process; and rotating the first image about the center of the cropping frame by a second angle less than the first angle, when the second operation element accepts the user operation for rotating the first image in the state in which the first image is displayed on the display by the first display process, and the cropping frame is displayed on the display by the second display process, and wherein when executed by the processor, the plurality of instructions cause the information processing device to, in the second image rotation process, perform: rotating the first image about the center of the first image by the first angle when the first operation element accepts the user operation for rotating the first image in the state in which the first image is displayed on the display by the first display process, and the cropping frame is not displayed on the display by the second display process; and rotating the first image about the center of the first image by the second angle when the second operation element accepts the user operation for rotating the first image in the state in which the first image is displayed on the display by the first display process, and the cropping frame is not displayed on the display by the second 
display process. 7. The non-transitory storage medium according to claim 6, wherein the second operation element comprises a slider movable on the display in a first direction, and wherein when executed by the processor, the plurality of instructions cause the information processing device to perform: in the first image rotation process, rotating the first image about the center of the cropping frame by the second angle when the slider is moved by a sliding operation as the user operation in the state in which the first image is displayed on the display by the first display process, and the cropping frame is displayed on the display by the second display process; and in the second image rotation process, rotating the first image about the center of the first image by the second angle when the slider is moved by the sliding operation in the state in which the first image is displayed on the display by the first display process, and the cropping frame is not displayed on the display by the second display process. 8. The non-transitory storage medium according to claim 6, wherein the first angle is 90 degrees. 9. The non-transitory storage medium according to claim 1, wherein the cropping frame is of a rectangular shape, and wherein when executed by the processor, the plurality of instructions cause the information processing device to perform, in the first image rotation process, rotating the first image about a position spaced equally from vertices of respective four corners of the cropping frame, as the center of the cropping frame, when the operation device accepts the user operation for rotating the first image. 10. 
The non-transitory storage medium according to claim 1, wherein the first image displayed on the display in the first display process is of a rectangular shape, and wherein when executed by the processor, the plurality of instructions cause the information processing device to perform, in the second image rotation process, rotating the first image about a position spaced equally from vertices of respective four corners of the first image, as the center of the first image, when the operation device accepts the user operation for rotating the first image. 11. An information processing device, comprising: an operation device configured to accept a user operation; a display; and a controller configured to execute: displaying a first image on the display based on image data in a first display process; displaying a cropping frame on the display, in a second display process, when the operation device accepts the user operation for displaying the cropping frame in a state in which the first image is displayed on the display by the first display process, the cropping frame being for cropping of the first image; rotating the first image about a center of the cropping frame, in a first image rotation process, when the operation device accepts the user operation for rotating the first image in a state in which the first image is displayed on the display by the first display process, and the cropping frame is displayed on the display by the second display process; and rotating the first image about a center of the first image, in a second image rotation process, when the operation device accepts the user operation for rotating the first image in a state in which the first image is displayed on the display by the first display process, and the cropping frame is not displayed on the display by the second display process. 12. 
A non-transitory storage medium storing a plurality of instructions executable by a processor of an information processing device, the information processing device comprising: an operation device configured to accept a user operation; and a display, the plurality of instructions, when executed by the processor, causing the information processing device to perform: displaying (i) a first image based on image data, (ii) a first operation element, and (iii) a second operation element different from the first operation element, on the display in a display process; rotating the first image displayed on the display by the display process, by a first angle in a first-angle rotation process when the operation device accepts the user operation for the first operation element; and rotating the first image displayed on the display by the display process, by a second angle less than the first angle, in a second-angle rotation process, when the operation device accepts the user operation for the second operation element. 13. The non-transitory storage medium according to claim 12, wherein the second operation element comprises a slider movable on the display in a first direction, and wherein when executed by the processor, the plurality of instructions cause the information processing device to perform, in the second-angle rotation process, rotating the first image by the second angle when a sliding operation as the user operation is performed for the slider. 14. The non-transitory storage medium according to claim 13, wherein the slider is configured to be moved in the first direction when a sliding operation in the first direction is performed on the display as the user operation. 15. 
The non-transitory storage medium according to claim 12, wherein when executed by the processor, the plurality of instructions cause the information processing device to, in the display process, perform: displaying a third operation element on the display; and displaying the first image and a cropping frame for cropping the first image, on the display when the operation device accepts the user operation for the third operation element displayed on the display, and wherein when executed by the processor, the plurality of instructions cause the information processing device to perform: in the first-angle rotation process, rotating both of the first image and the cropping frame by the first angle when the operation device accepts the user operation for the first operation element in a state in which the first image and the cropping frame are displayed on the display; and in the second-angle rotation process, rotating the first image by the second angle and not rotating the cropping frame when the operation device accepts the user operation for the second operation element in the state in which the first image and the cropping frame are displayed on the display. 16. 
The non-transitory storage medium according to claim 15, wherein when executed by the processor, the plurality of instructions cause the information processing device to perform: in the display process, displaying the first image on the display in a state in which the first image is cropped based on the cropping frame when the operation device accepts the user operation for the third operation element; and in the first-angle rotation process, rotating at least a portion of the first image in a state in which the at least the portion of the first image is displayed on an inside area of the cropping frame and an outside area located outside the cropping frame, when the operation device accepts the user operation for the first operation element in the state in which the first image is displayed on the display with the first image being cropped based on the cropping frame. 17. The non-transitory storage medium according to claim 15, wherein when executed by the processor, the plurality of instructions cause the information processing device to perform: in the display process, displaying the first image on the display in a state in which the first image is cropped based on the cropping frame when the operation device accepts the user operation for the third operation element; and in the second-angle rotation process, rotating at least a portion of the first image in a state in which the at least the portion of the first image is displayed on an inside area of the cropping frame and an outside area located outside the cropping frame, when the operation device accepts the user operation for the second operation element in the state in which the first image is displayed on the display with the first image being cropped based on the cropping frame. 18. The non-transitory storage medium according to claim 12, wherein the first angle is 90 degrees. 19. 
The non-transitory storage medium according to claim 12, wherein when executed by the processor, the plurality of instructions cause the information processing device to perform, in the display process, displaying an image based on scan data on the display as the first image. 20. The non-transitory storage medium according to claim 19, wherein when executed by the processor, the plurality of instructions cause the information processing device to, in the second-angle rotation process, perform: rotating the first image displayed on the display by the second angle when a scan condition for the scan data is a first condition and when the operation device accepts the user operation for the second operation element; and rotating the first image displayed on the display by a third angle less than the second angle when the scan condition for the scan data is a second condition and when the operation device accepts the user operation for the second operation element. 21. An information processing device, comprising: an operation device configured to accept a user operation; a display; and a controller configured to execute: displaying a first image based on image data, a first operation element, and a second operation element different from the first operation element, on the display in a display process; rotating the first image displayed on the display by the display process, by a first angle in a first-angle rotation process when the operation device accepts the user operation for the first operation element; and rotating the first image displayed on the display by the display process, by a second angle less than the first angle in a second-angle rotation process when the operation device accepts the user operation for the second operation element.
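Claims 1, 9, and 10 above distinguish rotating the first image about the center of the cropping frame from rotating it about the center of the image itself, each center being the position spaced equally from the four corner vertices of the respective rectangle. A minimal sketch of that geometry follows; the helper names, the counterclockwise angle convention, and the example coordinates are illustrative assumptions:

```python
import math

def rect_center(corners):
    # Position spaced equally from the four corner vertices of an
    # axis-aligned rectangle (the image or the cropping frame).
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)

def rotate_point(point, center, angle_deg):
    # Rotate `point` about `center` by `angle_deg`, counterclockwise.
    ang = math.radians(angle_deg)
    dx, dy = point[0] - center[0], point[1] - center[1]
    return (center[0] + dx * math.cos(ang) - dy * math.sin(ang),
            center[1] + dx * math.sin(ang) + dy * math.cos(ang))

# With a cropping frame displayed, the first image rotation process
# pivots the image corners about the frame's center; without it, the
# second image rotation process pivots about the image's own center.
image_corners = [(0, 0), (4, 0), (4, 2), (0, 2)]
frame_corners = [(2, 0), (4, 0), (4, 2), (2, 2)]
about_frame = [rotate_point(p, rect_center(frame_corners), 90)
               for p in image_corners]
about_image = [rotate_point(p, rect_center(image_corners), 90)
               for p in image_corners]
```

The same `rotate_point` helper covers both the first angle (e.g. the 90 degrees of claim 18) and the smaller second angle driven by the slider; only the `angle_deg` argument changes.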
An information processing device, comprising: an operation device configured to accept a user operation; a display; and a controller configured to execute: displaying a first image on the display based on image data in a first display process; displaying a cropping frame on the display, in a second display process, when the operation device accepts the user operation for displaying the cropping frame in a state in which the first image is displayed on the display by the first display process, the cropping frame being for cropping of the first image; rotating the first image about a center of the cropping frame, in a first image rotation process, when the operation device accepts the user operation for rotating the first image in a state in which the first image is displayed on the display by the first display process, and the cropping frame is displayed on the display by the second display process; and rotating the first image about a center of the first image, in a second image rotation process, when the operation device accepts the user operation for rotating the first image in a state in which the first image is displayed on the display by the first display process, and the cropping frame is not displayed on the display by the second display process. 12. 
A non-transitory storage medium storing a plurality of instructions executable by a processor of an information processing device, the information processing device comprising: an operation device configured to accept a user operation; and a display, the plurality of instructions, when executed by the processor, causing the information processing device to perform: displaying (i) a first image based on image data, (ii) a first operation element, and (iii) a second operation element different from the first operation element, on the display in a display process; rotating the first image displayed on the display by the display process, by a first angle in a first-angle rotation process when the operation device accepts the user operation for the first operation element; and rotating the first image displayed on the display by the display process, by a second angle less than the first angle, in a second-angle rotation process, when the operation device accepts the user operation for the second operation element. 13. The non-transitory storage medium according to claim 12, wherein the second operation element comprises a slider movable on the display in a first direction, and wherein when executed by the processor, the plurality of instructions cause the information processing device to perform, in the second-angle rotation process, rotating the first image by the second angle when the sliding operation as the user operation is performed for the slider. 14. The non-transitory storage medium according to claim 13, wherein the slider is configured to be moved in the first direction when a sliding operation in the first direction is performed on the display as the user operation. 15. 
The non-transitory storage medium according to claim 12, wherein when executed by the processor, the plurality of instructions cause the information processing device to, in the display process, perform: displaying a third operation element on the display; and displaying the first image and a cropping frame for cropping the first image, on the display when the operation device accepts the user operation for the third operation element displayed on the display, and wherein when executed by the processor, the plurality of instructions cause the information processing device to perform: in the first-angle rotation process, rotating both of the first image and the cropping frame by the first angle when the operation device accepts the user operation for the first operation element in a state in which the first image and the cropping frame are displayed on the display; and in the second-angle rotation process, rotating the first image by the second angle and not rotating the cropping frame when the operation device accepts the user operation for the second operation element in the state in which the first image and the cropping frame are displayed on the display. 16. 
The non-transitory storage medium according to claim 15, wherein when executed by the processor, the plurality of instructions cause the information processing device to perform: in the display process, displaying the first image on the display in a state in which the first image is cropped based on the cropping frame when the operation device accepts the user operation for the third operation element; and in the first-angle rotation process, rotating at least a portion of the first image in a state in which the at least the portion of the first image is displayed on an inside area of the cropping frame and an outside area located outside the cropping frame, when the operation device accepts the user operation for the first operation element in the state in which the first image is displayed on the display with the first image being cropped based on the cropping frame. 17. The non-transitory storage medium according to claim 15, wherein when executed by the processor, the plurality of instructions cause the information processing device to perform: in the display process, displaying the first image on the display in a state in which the first image is cropped based on the cropping frame when the operation device accepts the user operation for the third operation element; and in the second-angle rotation process, rotating at least a portion of the first image in a state in which the at least the portion of the first image is displayed on an inside area of the cropping frame and an outside area located outside the cropping frame, when the operation device accepts the user operation for the second operation element in the state in which the first image is displayed on the display with the first image being cropped based on the cropping frame. 18. The non-transitory storage medium according to claim 12, wherein the first angle is 90 degrees. 19. 
The non-transitory storage medium according to claim 12, wherein when executed by the processor, the plurality of instructions cause the information processing device to perform, in the display process, displaying an image based on scan data on the display as the first image. 20. The non-transitory storage medium according to claim 19, wherein when executed by the processor, the plurality of instructions cause the information processing device to, in the second-angle rotation process, perform: rotating the first image displayed on the display by the second angle when a scan condition for the scan data is a first condition and when the operation device accepts the user operation for the second operation element; and rotating the first image displayed on the display by a third angle less than the second angle when the scan condition for the scan data is a second condition and when the operation device accepts the user operation for the second operation element. 21. An information processing device, comprising: an operation device configured to accept a user operation; a display; and a controller configured to execute: displaying a first image based on image data, a first operation element, and a second operation element different from the first operation element, on the display in a display process; rotating the first image displayed on the display by the display process, by a first angle in a first-angle rotation process when the operation device accepts the user operation for the first operation element; and rotating the first image displayed on the display by the display process, by a second angle less than the first angle in a second-angle rotation process when the operation device accepts the user operation for the second operation element.
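The pivot-selection behavior claimed above (rotate the first image about the cropping frame's center when a frame is displayed, otherwise about the image's own center) can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the rectangle representation and function names are assumptions.

```python
import math

def rotate_point(x, y, cx, cy, degrees):
    """Rotate a point (x, y) about pivot (cx, cy) by the given angle
    (counter-clockwise, in degrees)."""
    rad = math.radians(degrees)
    dx, dy = x - cx, y - cy
    rx = cx + dx * math.cos(rad) - dy * math.sin(rad)
    ry = cy + dx * math.sin(rad) + dy * math.cos(rad)
    return rx, ry

def rotation_pivot(crop_frame, image_rect):
    """Pick the pivot per the claims: the cropping frame's center when a
    frame is displayed, otherwise the image's own center.
    Rectangles are assumed to be (x, y, width, height) tuples."""
    target = crop_frame if crop_frame is not None else image_rect
    x, y, w, h = target
    return (x + w / 2, y + h / 2)
```

For a rectangular cropping frame, `rotation_pivot` returns the position spaced equally from its four corner vertices, matching the wording of claims 9 and 10.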
2,600
10,278
10,278
15,866,075
2,613
In one embodiment, a method includes causing a building model for a modeled building to be presented on a client computer. The building model includes a three-dimensional scene. The three-dimensional scene includes an individual rendering of at least selected building components for the modeled building. The method further includes permitting a user of the client computer to perform a virtual walkthrough of the three-dimensional scene. In addition, the method includes receiving a user change to the three-dimensional scene via a graphical user interface (GUI) component. Furthermore, the method includes dynamically changing the building model in accordance with the user change. The dynamically changing includes individually modifying an appearance of at least one building component of the at least selected building components in the three-dimensional scene.
1. (canceled) 2. A method at a computer server of providing building product models and analytics, the method comprising: obtaining a computer file comprising a building model for a modeled building to be presented in a three-dimensional visualization on a client device, the building model comprising an initial building product; determining modeling properties for the initial building product and each of one or more alternative building products wherein: for the initial building product and each of the one or more alternative building products, the modeling properties indicate how the respective initial building product or alternative building product should appear when the building model is presented in the three-dimensional visualization on the client device, the modeling properties are based on one or more product attributes, wherein the one or more product attributes of the one or more alternative building products are different than the one or more product attributes of the initial building product, and the modeling properties are additionally based on one or more building-site attributes, the one or more building-site attributes comprising one or more attributes specific to a geographic location of the modeled building; causing the building model comprising the initial building product to be presented in the three-dimensional visualization on the client device via a graphical user interface (GUI) that allows a user to modify a view of the building model; in response to user input received via the GUI, causing the three-dimensional visualization presented on the client device to replace the initial building product in the building model with at least one of the one or more alternative building products; and changing an analytical model for the modeled building, wherein: the analytical model is used to provide an analytical visualization providing a comparison of selected building products relative to one or more building properties on the client device, and 
the selected building products are selected by the user from the initial building product and the one or more alternative building products. 3. The method of claim 2, wherein the computer file comprises a Building Information Modeling (BIM) file. 4. The method of claim 2, wherein the one or more building-site attributes comprise: sunlight direction at the geographic location, sunlight brightness at a geographic location, an orientation of the modeled building, or any combination thereof. 5. The method of claim 2, further comprising: obtaining geographic coordinates of the geographic location of the modeled building, and determining at least a part of the one or more building-site attributes based on the geographic coordinates. 6. The method of claim 2, wherein the analytical model relates to: a solar study, a shadow study, a wind study, a renewable-energy study, an acoustics study, a natural-ventilation study, an energy-model study, a daylight study, or any combination thereof. 7. The method of claim 2, wherein the one or more building properties comprise: energy usage, energy cost, capital cost, simple payback, heating capacity reduction, cooling capacity reduction, or any combination thereof. 8. The method of claim 2, wherein the GUI allows a user to modify a view of the building model by selecting from among a plurality of different views. 9. The method of claim 8, wherein the plurality of different views comprises a plurality of virtual rooms. 10. The method of claim 2, wherein the analytical visualization comprises: a chart, a graph, an animation, a three-dimensional scene, or any combination thereof. 11. The method of claim 2, wherein the GUI further allows a user to adjust lighting in the three-dimensional visualization presented on the client device. 12. 
A computer server comprising: a communication interface configured to communicate with a client device; a memory; and a processor communicatively coupled with the communication interface and the memory and configured to: obtain a computer file comprising a building model for a modeled building to be presented in a three-dimensional visualization on the client device, the building model comprising an initial building product; determine modeling properties for the initial building product and each of one or more alternative building products wherein: for the initial building product and each of the one or more alternative building products, the modeling properties indicate how the respective initial building product or alternative building product should appear when the building model is presented in the three-dimensional visualization on the client device, the modeling properties are based on one or more product attributes, wherein the one or more product attributes of the one or more alternative building products are different than the one or more product attributes of the initial building product, and the modeling properties are additionally based on one or more building-site attributes, the one or more building-site attributes comprising one or more attributes specific to a geographic location of the modeled building; communicate with the client device, via the communication interface, to: cause the building model comprising the initial building product to be presented in the three-dimensional visualization on the client device via a graphical user interface (GUI) that allows a user to modify a view of the building model; and in response to user input received via the GUI, cause the three-dimensional visualization presented on the client device to replace the initial building product in the building model with at least one of the one or more alternative building products; change an analytical model for the modeled building; and communicate with the client device, 
via the communication interface, to cause the client device to provide an analytical visualization based at least in part on the changed analytical model, wherein the analytical visualization provides a comparison of selected building products relative to one or more building properties, and the selected building products are selected from the initial building product and the one or more alternative building products. 13. The computer server of claim 12, wherein the computer file comprises a Building Information Modeling (BIM) file. 14. The computer server of claim 12, wherein the one or more building-site attributes comprise: sunlight direction at the geographic location, sunlight brightness at a geographic location, an orientation of the modeled building, or any combination thereof. 15. The computer server of claim 12, wherein the processor is further configured to: obtain geographic coordinates of the geographic location of the modeled building; and determine at least a part of the one or more building-site attributes based on the geographic coordinates. 16. The computer server of claim 12, wherein the analytical model relates to: a solar study, a shadow study, a wind study, a renewable-energy study, an acoustics study, a natural-ventilation study, an energy-model study, a daylight study, or any combination thereof. 17. The computer server of claim 12, wherein the one or more building properties comprise: energy usage, energy cost, capital cost, simple payback, heating capacity reduction, cooling capacity reduction, or any combination thereof. 18. The computer server of claim 12, wherein the processor is further configured to communicate with the client device, via the communication interface, to cause the GUI to enable a user to modify a view of the building model by selecting from among a plurality of different views. 19. The computer server of claim 18, wherein the plurality of different views comprises a plurality of virtual rooms. 20. 
The computer server of claim 12, wherein the processor is configured to communicate with the client device, via the communication interface, to cause the client device to provide the analytical visualization at least in part by causing the client device to display: a chart, a graph, an animation, a three-dimensional scene, or any combination thereof. 21. The computer server of claim 12, wherein the processor is further configured to communicate with the client device, via the communication interface, to cause the GUI to allow a user to adjust lighting in the three-dimensional visualization presented on the client device.
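The product-replacement and comparison flow claimed above (swap an initial building product for an alternative in the model, then provide an analytical visualization comparing selected products relative to a building property) can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the product IDs, attribute names, and values are hypothetical assumptions.

```python
# Hypothetical product catalog; IDs and attribute values are
# illustrative assumptions, not data from the patent.
PRODUCTS = {
    "window_standard": {"energy_cost": 1200, "capital_cost": 300},
    "window_triple":   {"energy_cost": 900,  "capital_cost": 520},
}

def replace_product(building_model, slot, new_product_id):
    """Return a copy of the model with the product in `slot` swapped,
    as the claimed GUI-driven replacement would do server-side."""
    updated = dict(building_model)
    updated[slot] = new_product_id
    return updated

def compare_products(selected_ids, building_property):
    """Analytical comparison of the selected products relative to one
    building property (e.g. energy cost or capital cost)."""
    return {pid: PRODUCTS[pid][building_property] for pid in selected_ids}
```

A comparison like `compare_products([...], "capital_cost")` would feed the chart or graph named in claim 20.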
In one embodiment, a method includes causing a building model for a modeled building to be presented on a client computer. The building model includes a three-dimensional scene. The three-dimensional scene includes an individual rendering of at least selected building components for the modeled building. The method further includes permitting a user of the client computer to perform a virtual walkthrough of the three-dimensional scene. In addition, the method includes receiving a user change to the three-dimensional scene via a graphical user interface (GUI) component. Furthermore, the method includes dynamically changing the building model in accordance with the user change. The dynamically changing includes individually modifying an appearance of at least one building component of the at least selected building components in the three-dimensional scene.1. (canceled) 2. A method at a computer server of providing building product models and analytics, the method comprising: obtaining a computer file comprising a building model for a modeled building to be presented in a three-dimensional visualization on a client device, the building model comprising an initial building product; determining modeling properties for the initial building product and each of one or more alternative building products wherein: for the initial building product and each of the one or more alternative building products, the modeling properties indicate how the respective initial building product or alternative building product should appear when the building model is presented in the three-dimensional visualization on the client device, the modeling properties are based on one or more product attributes, wherein the one or more product attributes of the one or more alternative building products are different than the one or more product attributes of the initial building product, and the modeling properties are additionally based on one or more building-site attributes, the one or more 
building-site attributes comprising one or more attributes specific to a geographic location of the modeled building; causing the building model comprising the initial building product to be presented in the three-dimensional visualization on the client device via a graphical user interface (GUI) that allows a user to modify a view of the building model; in response to user input received via the GUI, causing the three-dimensional visualization presented on the client device to replace the initial building product in the building model with at least one of the one or more alternative building products; and changing an analytical model for the modeled building, wherein: the analytical model is used to provide an analytical visualization providing a comparison of selected building products relative to one or more building properties on the client device, and the selected building products are selected by the user from the initial building product and the one or more alternative building products. 3. The method of claim 2, wherein the computer file comprises a Building Information Modeling (BIM) file. 4. The method of claim 2, wherein the one or more building-site attributes comprise: sunlight direction at the geographic location, sunlight brightness at a geographic location, an orientation of the modeled building, or any combination thereof. 5. The method of claim 2, further comprising: obtaining geographic coordinates of the geographic location of the modeled building, and determining at least a part of the one or more building-site attributes based on the geographic coordinates. 6. The method of claim 2, wherein the analytical model relates to: a solar study, a shadow study, a wind study, a renewable-energy study, an acoustics study, a natural-ventilation study, an energy-model study, a daylight study, or any combination thereof. 7. 
The method of claim 2, wherein the one or more building properties comprise: energy usage, energy cost, capital cost, simple payback, heating capacity reduction, cooling capacity reduction, or any combination thereof. 8. The method of claim 2, wherein the GUI allows a user to modify a view of the building model by selecting from among a plurality of different views. 9. The method of claim 8, wherein the plurality of different views comprises a plurality of virtual rooms. 10. The method of claim 2, wherein the analytical visualization comprises: a chart, a graph, an animation, a three-dimensional scene, or any combination thereof. 11. The method of claim 2, wherein the GUI further allows a user to adjust lighting in the three-dimensional visualization presented on the client device. 12. A computer server comprising: a communication interface configured to communicate with a client device; a memory; and a processor communicatively coupled with the communication interface and the memory and configured to: obtain a computer file comprising a building model for a modeled building to be presented in a three-dimensional visualization on the client device, the building model comprising an initial building product; determine modeling properties for the initial building product and each of one or more alternative building products wherein: for the initial building product and each of the one or more alternative building products, the modeling properties indicate how the respective initial building product or alternative building product should appear when the building model is presented in the three-dimensional visualization on the client device, the modeling properties are based on one or more product attributes, wherein the one or more product attributes of the one or more alternative building products are different than the one or more product attributes of the initial building product, and the modeling properties are additionally based on one or more building-site 
attributes, the one or more building-site attributes comprising one or more attributes specific to a geographic location of the modeled building; communicate with the client device, via the communication interface, to: cause the building model comprising the initial building product to be presented in the three-dimensional visualization on the client device via a graphical user interface (GUI) that allows a user to modify a view of the building model; and in response to user input received via the GUI, cause the three-dimensional visualization presented on the client device to replace the initial building product in the building model with at least one of the one or more alternative building products; change an analytical model for the modeled building; and communicate with the client device, via the communication interface, to cause the client device to provide an analytical visualization based at least in part on the changed analytical model, wherein the analytical visualization provides a comparison of selected building products relative to one or more building properties, and the selected building products are selected from the initial building product and the one or more alternative building products. 13. The computer server of claim 12, wherein the computer file comprises a Building Information Modeling (BIM) file. 14. The computer server of claim 12, wherein the one or more building-site attributes comprise: sunlight direction at the geographic location, sunlight brightness at a geographic location, an orientation of the modeled building, or any combination thereof. 15. The computer server of claim 12, wherein the processor is further configured to: obtain geographic coordinates of the geographic location of the modeled building; and determine at least a part of the one or more building-site attributes based on the geographic coordinates. 16. 
The computer server of claim 12, wherein the analytical model relates to: a solar study, a shadow study, a wind study, a renewable-energy study, an acoustics study, a natural-ventilation study, an energy-model study, a daylight study, or any combination thereof. 17. The computer server of claim 12, wherein the one or more building properties comprise: energy usage, energy cost, capital cost, simple payback, heating capacity reduction, cooling capacity reduction, or any combination thereof. 18. The computer server of claim 12, wherein the processor is further configured to communicate with the client device, via the communication interface, to cause the GUI to enable a user to modify a view of the building model by selecting from among a plurality of different views. 19. The computer server of claim 18, wherein the plurality of different views comprises a plurality of virtual rooms. 20. The computer server of claim 12, wherein the processor is configured to communicate with the client device, via the communication interface, to cause the client device to provide the analytical visualization at least in part by causing the client device to display: a chart, a graph, an animation, a three-dimensional scene, or any combination thereof. 21. The computer server of claim 12, wherein the processor is further configured to communicate with the client device, via the communication interface, to cause the GUI to allow a user to adjust lighting in the three-dimensional visualization presented on the client device.
2,600
10,279
10,279
14,707,764
2,683
A method of controlling a vehicle to increase pedestrian protection comprises monitoring data from a plurality of sensors with an electronic control unit and determining a probability that a detected object proximate to the vehicle is a pedestrian. A first warning is provided to the pedestrian with at least one of an audible and visual warning. A second warning is provided to the pedestrian with at least one of an audible and visual warning when the pedestrian moves closer to the vehicle. The second warning has increased intensity compared to the first warning. Finally, at least one vehicle action is provided to mitigate an accident when the pedestrian continues in proximity to the vehicle following the second warning.
1. A method of controlling a vehicle to increase pedestrian protection comprising: monitoring data from a plurality of sensors with an electronic control unit; determining a probability that a detected object proximate to the vehicle is a pedestrian; providing a first warning to the pedestrian with at least one of an audible and visual warning; providing a second warning to the pedestrian with at least one of an audible and visual warning when the pedestrian moves closer to the vehicle, wherein the second warning has increased intensity compared to the first warning; and providing at least one vehicle action to mitigate an accident when the pedestrian continues in proximity to the vehicle following the second warning. 2. The method of claim 1, further comprising determining a probability that an impact with the detected object is likely to occur. 3. The method of claim 2, wherein the intensity of the first warning and the second warning are increased as the probability of impact increases. 4. The method of claim 1, further includes the vehicle implementing at least one safety precaution of: pre-tensioning seat belts, pre-charging an airbag restraint, pre-charging a head support system, pre-charging the brakes, deploying a bumper to an extended collision position, lowering a vehicle bumper, braking the vehicle, and steering the vehicle to avoid the detected object. 5. The method of claim 1, further comprising directing the first warning and the second warning toward the pedestrian. 6. 
A pedestrian protection system for a vehicle comprising: a plurality of sensors to monitor an area proximate to the vehicle; an ECU connected to the plurality of sensors to determine if an object detected by the sensors is a pedestrian, wherein the electronic control unit is configured with instructions for: monitoring data from a plurality of sensors with an electronic control unit; determining a probability that a detected object proximate to the vehicle is a pedestrian; providing a first warning to the pedestrian with at least one of an audible and visual warning; providing a second warning to the pedestrian with at least one of an audible and visual warning when the pedestrian moves closer to the vehicle, wherein the second warning has increased intensity compared to the first warning; and providing at least one vehicle action to mitigate an accident when the pedestrian continues in proximity to the vehicle following the second warning. 7. The pedestrian protection system of claim 6, wherein the electronic control unit is further configured with instructions for determining a probability that an impact with the detected object is likely to occur. 8. The pedestrian protection system of claim 7, wherein the intensity of the first warning and the second warning are increased as the probability of impact increases. 9. The pedestrian protection system of claim 6, wherein the electronic control unit is further configured with instructions implementing at least one safety precaution of: pre-tensioning seat belts, pre-charging an airbag restraint, pre-charging a head support system, pre-charging the brakes, deploying a bumper to an extended collision position, lowering a vehicle bumper, braking the vehicle, and steering the vehicle to avoid the detected object. 10. The pedestrian protection system of claim 6, wherein the first warning and the second warning are directed toward the pedestrian.
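The escalating response sequence claimed above (first warning, intensified second warning as the pedestrian moves closer, then a mitigating vehicle action) can be sketched as a simple decision function. This is an illustrative sketch only, not the patented control logic; the 0.5 probability threshold and the 10 m / 5 m distance ranges are assumptions.

```python
def pedestrian_response(pedestrian_prob, distance_m, warnings_given):
    """Return the next escalation step for the electronic control unit.

    pedestrian_prob: probability the detected object is a pedestrian.
    distance_m: current distance from the vehicle to the object.
    warnings_given: how many warnings have already been issued.
    Thresholds below are illustrative assumptions, not claimed values.
    """
    if pedestrian_prob < 0.5:
        return "monitor"                        # object likely not a pedestrian
    if warnings_given == 0 and distance_m <= 10.0:
        return "first_warning"                  # audible and/or visual warning
    if warnings_given == 1 and distance_m <= 5.0:
        return "second_warning_intensified"     # increased intensity
    if warnings_given >= 2 and distance_m <= 5.0:
        return "mitigation_action"              # e.g. brake, pre-tension belts
    return "monitor"
```

Per claims 3 and 8, a fuller model would also scale warning intensity with the computed probability of impact rather than using fixed steps.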
A method of controlling a vehicle to increase pedestrian protection comprises monitoring data from a plurality of sensors with an electronic control unit and determining a probability that a detected object proximate to the vehicle is a pedestrian. A first warning is provided to the pedestrian with at least one of an audible and visual warning. A second warning is provided to the pedestrian with at least one of an audible and visual warning when the pedestrian moves closer to the vehicle. The second warning has increased intensity compared to the first warning. Finally, at least one vehicle action is provided to mitigate an accident when the pedestrian continues in proximity to the vehicle following the second warning.
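The escalation logic this record describes (first warning, intensified second warning as the pedestrian approaches, then a mitigating vehicle action) can be sketched as a small state update. This is a hypothetical illustration: the thresholds, distances, and function names below are assumptions, not part of the patent.

```python
# Hedged sketch of the escalating pedestrian-warning logic. All numeric
# thresholds and names are illustrative assumptions.
WARNING_NONE, WARNING_FIRST, WARNING_SECOND, MITIGATE = 0, 1, 2, 3

def warning_level(pedestrian_prob, distance_m, prev_level,
                  prob_threshold=0.5, first_range_m=10.0, second_range_m=5.0):
    """Return the next warning level for a detected object.

    Escalates from a first warning to a more intense second warning as the
    pedestrian moves closer, then to a vehicle action (e.g. braking) if the
    pedestrian remains in proximity after the second warning.
    """
    if pedestrian_prob < prob_threshold:
        return WARNING_NONE            # object unlikely to be a pedestrian
    if distance_m <= second_range_m:
        # Still in close proximity after the second warning -> vehicle action.
        return MITIGATE if prev_level >= WARNING_SECOND else WARNING_SECOND
    if distance_m <= first_range_m:
        return max(prev_level, WARNING_FIRST)
    return WARNING_NONE
```

A real system would also raise warning intensity with the impact probability of claims 2-3; that dimension is omitted here for brevity.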
2,600
10,280
10,280
15,471,602
2,653
A loudspeaker includes a housing, a movable diaphragm within the housing for creating sound waves, and a grill secured to the loudspeaker which covers the diaphragm. At least a substantial portion of a perimeter of the grill has a peripheral region which is folded back on itself by an angle greater than about 95 degrees.
1. A loudspeaker, comprising: a housing; a movable diaphragm within the housing for creating sound waves; and a grill secured to the loudspeaker which covers the diaphragm, wherein at least a substantial portion of a perimeter of the grill has a peripheral region which is folded back on itself by an angle greater than about 95 degrees. 2. The loudspeaker of claim 1, wherein substantially all of the perimeter of the grill has a peripheral region which is folded back on itself by an angle greater than about 95 degrees. 3. The loudspeaker of claim 1, wherein the peripheral region of the grill is folded back on itself by an angle greater than about 100 degrees. 4. The loudspeaker of claim 1, wherein the peripheral region of the grill is folded back on itself by an angle between about 110 degrees and about 225 degrees. 5. The loudspeaker of claim 1, wherein the peripheral region of the grill is folded back on itself by an angle between about 120 degrees and about 180 degrees. 6. The loudspeaker of claim 1, wherein the peripheral region of the grill is folded back on itself by an angle between about 130 degrees and about 170 degrees. 7. The loudspeaker of claim 1, wherein the peripheral region of the grill is folded back on itself by an angle between about 140 degrees and about 160 degrees. 8. A speaker grill, comprising: a piece of material which is substantially acoustically transparent, the piece of material being securable to a speaker housing and able to cover a movable diaphragm, within the housing, for creating sound waves, wherein at least a substantial portion of a perimeter of the grill has a peripheral region which is folded back on itself by an angle greater than about 95 degrees. 9. The speaker grill of claim 8, wherein substantially all of the perimeter of the grill has a peripheral region which is folded back on itself by an angle greater than about 95 degrees. 10. 
The speaker grill of claim 8, wherein the peripheral region of the grill is folded back on itself by an angle greater than about 100 degrees. 11. The speaker grill of claim 8, wherein the peripheral region of the grill is folded back on itself by an angle between about 110 degrees and about 225 degrees. 12. The speaker grill of claim 8, wherein the peripheral region of the grill is folded back on itself by an angle between about 120 degrees and about 180 degrees. 13. The speaker grill of claim 8, wherein the peripheral region of the grill is folded back on itself by an angle between about 130 degrees and about 170 degrees. 14. The speaker grill of claim 8, wherein the peripheral region of the grill is folded back on itself by an angle between about 140 degrees and about 160 degrees. 15. A loudspeaker, comprising: a housing; a movable diaphragm within the housing for creating sound waves; and a grill secured to the loudspeaker which covers the diaphragm, wherein at least a substantial portion of a perimeter of the grill has a peripheral region which is folded back on itself by an angle greater than about 95 degrees, and wherein a portion of the housing and the at least a substantial portion of the peripheral region of the grill are located adjacent to a flat surface to which the loudspeaker is designed to be mounted when the loudspeaker is mounted to the surface. 16. The loudspeaker of claim 15, wherein substantially all of the perimeter of the grill has a peripheral region which is folded back on itself by an angle greater than about 95 degrees. 17. The loudspeaker of claim 15, wherein the peripheral region of the grill is folded back on itself by an angle greater than about 100 degrees. 18. The loudspeaker of claim 15, wherein the peripheral region of the grill is folded back on itself by an angle between about 110 degrees and about 225 degrees. 19. 
The loudspeaker of claim 15, wherein the peripheral region of the grill is folded back on itself by an angle between about 120 degrees and about 180 degrees. 20. The loudspeaker of claim 15, wherein the peripheral region of the grill is folded back on itself by an angle between about 130 degrees and about 170 degrees.
2,600
10,281
10,281
14,001,037
2,619
The present invention relates to a method and system for providing a face adjustment image, the method comprising the steps of: (a) generating a matched image by superimposing a cephalometric image having a cranium image of a patient whose face is to be corrected with a three-dimensional facial image of the patient; and (b) displaying a predicted facial image on a screen by transforming soft skin tissues of the face according to the skeletal change in the cephalometric image. According to the present invention, the change in the soft skin tissues and the predicted facial image are displayed on a screen of a computer, a terminal or the like based on the skeletal change in cranium, teeth, prosthesis or the like supporting the soft skin tissues. Therefore, the change in the soft skin tissues can be predicted, thereby increasing the accuracy of a face correction operation, making it more accurate and convenient to plan the operation, and enhancing communication between the patient and medical staff.
1. A method for providing a face adjustment image, comprising the steps of: (a) generating a matched image by superimposing a cephalometric image having a cranium image of a patient whose face is to be corrected with a three-dimensional facial image of the patient; and (b) displaying a predicted facial image on a screen based on the transformation of soft skin tissues in the matched image. 2. The method of claim 1, wherein, in the step (a), a plurality of first alignment points arranged on the facial image to superimpose the facial image and the cephalometric image are matched with a plurality of second alignment points arranged on the positions corresponding to those of the first alignment points on the outline of the cephalometric image, which is formed by the soft skin tissues, so that the facial image and the cephalometric image are superimposed. 3. The method of claim 2, wherein the step (a) comprises the steps of: (a1) receiving inputs for the first alignment points and the second alignment points; and (a2) displaying the facial image being superimposed with the cephalometric image on the screen. 4. The method of claim 3, wherein the step (a) further comprises the step of (a3) adjusting the size and orientation of the cephalometric image to the same as those of the facial image. 5. The method of claim 4, wherein the first alignment points comprise a pair of facial alignment points; the second alignment points comprise a pair of outline alignment points located on the positions corresponding to those of the facial alignment points; and the step (a3) is performed by matching the size and orientation of a first vector formed by the facial alignment points and a second vector formed by the outline alignment points. 6. 
The method of claim 3, wherein the step (a) further comprises the step of (a4) displaying matching alignment lines on the screen; and wherein the matching alignment lines are respectively formed on the first alignment points before the step (a1), and their orientations and lengths may be adjusted to achieve one-to-one correspondence of the first alignment points to the second alignment points. 7. The method of claim 2, wherein the step (a) generates the matched image by superimposing the cephalometric image on the cross section of the facial image divided by a matching reference line arranged on the facial image. 8. The method of claim 7, wherein the matching reference line is a vertical line dividing the facial image to the left and right sides, and the cephalometric image is a lateral image obtained by photographing the head of the patient on the lateral side perpendicular to the front side of the face. 9. The method of claim 1, wherein the step (a) displays one lateral side of the facial image divided by the cephalometric image transparently on the screen so that the cephalometric image is displayed on the screen. 10. The method of claim 1, wherein the step (b) comprises the steps of: (b1) determining the change in the soft skin tissues corresponding to the skeletal change in the head; and (b2) displaying the predicted facial image on the screen based on the change in the soft skin tissues. 11. The method of claim 10, wherein the skeletal change in the head results from at least one of tooth migration and cranial transformation. 12. The method of claim 10, wherein the step (b2) selectively displays contour lines for showing the change in the soft skin tissues on the predicted facial image. 13. The method of claim 1, wherein the step (b) displays the predicted facial images before and after a simulation on the screen simultaneously or sequentially. 14. 
A system for providing a face adjustment image, comprising: a matching module to generate a matched image by superimposing a cephalometric image having a cranium image of a patient whose face is to be corrected with a three-dimensional facial image of the patient; and an image display module to display a predicted facial image on a screen based on the transformation of soft skin tissues in the matched image. 15. A computer system for providing a face adjustment image, comprising: a first image acquisition unit to acquire a two-dimensional cephalometric image having a cranium image of a patient whose face is to be corrected; a second image acquisition unit to acquire a three-dimensional facial image by photographing the face of the patient; an image storage module to receive and store the cephalometric image and the three-dimensional facial image; a matching module to generate a matched image by superimposing the cephalometric image and the facial image stored in the image storage module; and an image display module to display a predicted facial image on a screen based on the transformation of soft skin tissues in the matched image. 16. A computer-readable recording medium having stored thereon a program for providing a face adjustment image, which enables a computer to function as: matching means to generate a matched image by superimposing a cephalometric image having a cranium image of a patient whose face is to be corrected with a three-dimensional facial image of the patient; and image display means to display a predicted facial image on a screen based on the transformation of soft skin tissues in the matched image.
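Claim 5 matches "the size and orientation of a first vector formed by the facial alignment points and a second vector formed by the outline alignment points," i.e. it derives the scale and rotation that bring the cephalometric image into register with the facial image. A minimal sketch of that vector-matching step, with illustrative point coordinates and function names (assumptions, not from the patent):

```python
import math

def alignment_transform(facial_pts, outline_pts):
    """Return (scale, rotation_radians) that map the outline vector
    (cephalometric image) onto the facial vector (3-D facial image).

    Each argument is a pair of (x, y) alignment points.
    """
    (fx1, fy1), (fx2, fy2) = facial_pts
    (ox1, oy1), (ox2, oy2) = outline_pts
    fvx, fvy = fx2 - fx1, fy2 - fy1      # first vector (facial alignment points)
    ovx, ovy = ox2 - ox1, oy2 - oy1      # second vector (outline alignment points)
    scale = math.hypot(fvx, fvy) / math.hypot(ovx, ovy)
    rotation = math.atan2(fvy, fvx) - math.atan2(ovy, ovx)
    return scale, rotation
```

Scaling and rotating the cephalometric image by these values, then translating one alignment point onto its counterpart, yields the superimposed matched image of step (a).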
2,600
10,282
10,282
15,004,098
2,658
A server prevents duplicate posts within a question and answer forum. The server may compare the user question vector to each of the plurality of corpus question vectors to determine the closest match between the user question vector and the corpus question vectors to obtain an identified question and answer row, and determine if the identified Q and A row has a last answer that has a corresponding confidence to the question of the identified Q and A row that exceeds a confidence threshold. Responsive to a positive determination, the server may determine if the user question is similar to a question in the identified Q and A row, and if so, the server may determine that the last answer is similar to any answer in the identified Q and A row that is not the last answer, and in response, block the submission of the user question.
1-7. (canceled) 8. A computer program product for preventing duplicate posts within a Q and A forum, the computer program product comprising: a computer usable medium having computer usable program code embodied therewith, the computer program product comprising: computer usable program code configured to receive a user question from a user at the Q and A forum; computer usable program code configured to apply natural language processing to the user question to form a user question vector; computer usable program code configured to apply natural language processing to each question in a question and answer (Q and A) corpus to form a plurality of corpus question vectors, wherein each question is in a row having at least the each question; computer usable program code configured to compare the user question vector to each of the plurality of corpus question vectors to determine a closest match between the user question vector and the corpus question vectors to obtain an identified question and answer (Q and A) row; computer usable program code configured to determine if the identified Q and A row has a last answer that has a corresponding confidence to the question of the identified Q and A row that exceeds a confidence threshold and in response, computer usable program code configured to determine if the user question has a higher similarity to a question in the identified Q and A row as compared to a question similarity threshold, and if so, determine that at least one pairing of the last answer to another answer in the identified Q and A row has a similarity exceeding a last threshold to any answer in the identified Q and A row that is not the last answer, and in response, block the submission of the user question as a distinct question and direct the user to at least one answer of the identified Q and A row; and if not, determine that the last answer is similar above a last threshold to any answer in the identified Q and A row that is not the last answer, and 
in response, the computer usable program code configured to append the user question to the identified Q and A row. 9. The computer program product of claim 8, wherein computer usable program code configured to determine if the user question has a higher similarity to a question in the identified Q and A row as compared to a question similarity threshold comprises: computer usable program code configured to iterate over all questions in the identified Q and A row and to compare the user question vector to a question vector of each of the questions to determine a question in the identified Q and A row having the highest similarity to the user question; and computer usable program code configured to determine if the question in the identified Q and A row having the highest similarity to the user question exceeds the question similarity threshold. 10. The computer program product of claim 8, wherein computer usable program code configured to determine if the identified Q and A row has the last answer that has the corresponding confidence to the question comprises computer usable program code configured to sum at least one vote corresponding to the last answer. 11. The computer program product of claim 10, wherein the confidence threshold is at least one vote. 12. The computer program product of claim 8, wherein computer usable program code configured to apply natural language processing to the user question further comprises computer usable program code configured to reduce each word to a word root to form the user question vector. 13. The computer program product of claim 12, wherein computer usable program code configured to reduce each word to a word root further comprises replacing the word root with a preferred synonym. 14. 
A data processing system comprising: a bus; a computer readable tangible storage device connected to the bus, wherein computer usable code is located in the computer readable tangible storage device; a communication unit connected to the bus; and a processing unit connected to the bus, wherein the processor executes the computer usable code for preventing duplicate posts within a Q and A forum, wherein the processor executes the computer usable program code to receive a user question from a user at the Q and A forum; apply natural language processing to the user question to form a user question vector; apply natural language processing to each question in a question and answer (Q and A) corpus to form a plurality of corpus question vectors, wherein each question is in a row having at least the each question; compare the user question vector to each of the plurality of corpus question vectors to determine a closest match between the user question vector and the corpus question vectors to obtain an identified question and answer (Q and A) row; determine if the identified Q and A row has a last answer that has a corresponding confidence to the question of the identified Q and A row that exceeds a confidence threshold and in response, determine if the user question has a higher similarity to a question in the identified Q and A row as compared to a question similarity threshold, and if so, determine that at least one pairing of the last answer to another answer in the identified Q and A row has a similarity exceeding a last threshold, and in response, block the submission of the user question as a distinct question and direct the user to at least one answer of the identified Q and A row; and if not, post the user question as an unanswered question. 15. 
The data processing system of claim 14, wherein in executing the computer usable program code to determine if the user question has a higher similarity to a question in the identified Q and A row as compared to a question similarity threshold the processor further executes the computer usable program code to iterate over all questions in the identified Q and A row and comparing a user question vector to each of the question vectors to each of all questions to determine a question in the identified Q and A row having the highest similarity to the user question; and determine if the question in the identified Q and A row having the highest similarity to the user question exceeds the question similarity threshold. 16. The data processing system of claim 14, wherein in executing the computer usable program code to determine if the identified Q and A row has the last answer that has the corresponding confidence to the question, the processor further executes the computer usable program code to sum at least one vote corresponding to the last answer. 17. The data processing system of claim 16, wherein the confidence threshold is at least one vote. 18. The data processing system of claim 14, wherein the post the user question as an unanswered question comprises storing the user question to the Q and A corpus as a new row to the Q and A corpus. 19. The data processing system of claim 14, wherein in executing the computer usable program code to apply natural language processing to the user question the processor further executes the computer usable program code to reduce each word to a word root to form the user question vector. 20. 
The data processing system of claim 14, wherein in executing the computer usable program code to post the user question as an unanswered question, the processor further executes the computer usable program code to determine that the last answer is not similar below a last threshold to any answer in the identified Q and A row that is not the last answer, and in response, post the user question as an unanswered question.
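The claims above describe forming question vectors through natural language processing with word-root reduction, then comparing a user question vector against every corpus question vector to find the closest Q and A row. A minimal sketch of that comparison, assuming simple bag-of-words vectors, naive suffix-stripping stemming, and cosine similarity — the stemmer, the 0.8 threshold, and the row layout are illustrative assumptions, not taken from the claims:

```python
import math
from collections import Counter

def stem(word):
    # Naive suffix stripping as a stand-in for word-root reduction (claim 12).
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def vectorize(question):
    # Bag-of-stemmed-words vector for one question string.
    return Counter(stem(w.strip(".,?!")) for w in question.lower().split())

def cosine(a, b):
    # Cosine similarity between two Counter vectors.
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def closest_row(user_q, corpus_rows, sim_threshold=0.8):
    # corpus_rows: list of Q and A rows, each a dict with a "questions" list
    # (hypothetical layout). Returns the identified row, or None if even the
    # best match falls below the question similarity threshold.
    uv = vectorize(user_q)
    best_row, best_sim = None, 0.0
    for row in corpus_rows:
        for q in row["questions"]:
            s = cosine(uv, vectorize(q))
            if s > best_sim:
                best_row, best_sim = row, s
    return best_row if best_sim >= sim_threshold else None
```

With a corpus row whose question is "How do I reset my password?", a near-duplicate like "How do I reset the password" matches the row, while an unrelated question falls below the threshold and would be posted as a new, unanswered row.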
A server prevents duplicate posts within a question and answer forum. The server may compare the user question vector to each of the plurality of corpus question vectors to determine the closest match between the user question vector and the corpus question vectors to obtain an identified question and answer row, and determine if the identified Q and A row has a last answer that has a corresponding confidence to the question of the identified Q and A row that exceeds a confidence threshold. Responsive to a positive determination, the server may determine if the user question is similar to a question in the identified Q and A row, and if so the server may determine that the last answer is similar to any answer in the identified Q and A row that is not the last answer, and in response, block the submission of the user question.1-7. (canceled) 8. A computer program product for preventing duplicate posts within a Q and A forum, the computer program product comprising: a computer usable medium having computer usable program code embodied therewith, the computer program product comprising: computer usable program code configured to receive a user question from a user at the Q and A forum; computer usable program code configured to apply natural language processing to the user question to form a user question vector; computer usable program code configured to apply natural language processing to each question in a question and answer (Q and A) corpus to form a plurality of corpus question vectors, wherein each question is in a row having at least the each question; computer usable program code configured to compare the user question vector to each of the plurality of corpus question vectors to determine a closest match between the user question vector and the corpus question vectors to obtain an identified question and answer (Q and A) row; computer usable program code configured to determine if the identified Q and A row has a last answer that has a corresponding 
confidence to the question of the identified Q and A row that exceeds a confidence threshold and in response, computer usable program code configured to determine if the user question has a higher similarity to a question in the identified Q and A row as compared to a question similarity threshold, and if so, determine that at least one pairing of the last answer to another answer in the identified Q and A row has a similarity exceeding a last threshold to any answer in the identified Q and A row that is not the last answer, and in response, block the submission of the user question as a distinct question and directing the user to at least one answer of the identified Q and A row; and if not, determine that the last answer is similar above a last threshold to any answer in the identified Q and A row that is not the last answer, and in response, the computer usable program code configured to append the user question to the identified Q and A row. 9. The computer program product of claim 8, wherein computer usable program code configured to determine if the user question has a higher similarity to a question in the identified Q and A row as compared to a question similarity threshold comprises: computer usable program code configured to iterate over all questions in the identified Q and A row and to compare a user question vector to each of the question vectors to each of all questions to determine a question in the identified Q and A row having the highest similarity to the user question; and computer usable program code configured to determine if the question in the identified Q and A row having the highest similarity to the user question exceeds the question similarity threshold. 10. 
The computer program product of claim 8, wherein computer usable program code configured to determine if the identified Q and A row has the last answer that has the corresponding confidence to the question comprises computer usable program code configured to sum at least one vote corresponding to the last answer. 11. The computer program product of claim 10, wherein the confidence threshold is at least one vote. 12. The computer program product of claim 8, wherein computer usable program code configured to apply natural language processing to the user question further comprises computer usable program code configured to reduce each word to a word root to form the user question vector. 13. The computer program product of claim 12, wherein computer usable program code configured to reduce each word to a word root further comprises replacing the word root with a preferred synonym. 14. A data processing system comprising: a bus; a computer readable tangible storage device connected to the bus, wherein computer usable code is located in the computer readable tangible storage device; a communication unit connected to the bus; and a processing unit connected to the bus, wherein the processor executes the computer usable code for preventing duplicate posts within a Q and A forum, wherein the processor executes the computer usable program code to receive a user question from a user at the Q and A forum; apply natural language processing to the user question to form a user question vector; apply natural language processing to each question in a question and answer (Q and A) corpus to form a plurality of corpus question vectors, wherein each question is in a row having at least the each question; compare the user question vector to each of the plurality of corpus question vectors to determine a closest match between the user question vector and the corpus question vectors to obtain an identified question and answer (Q and A) row; determine if the identified Q and A row has a 
last answer that has a corresponding confidence to the question of the identified Q and A row that exceeds a confidence threshold and in response, determine if the user question has a higher similarity to a question in the identified Q and A row as compared to a question similarity threshold, and if so, determine that at least one pairing of the last answer to another answer in the identified Q and A row has a similarity exceeding a last threshold, and in response, block the submission of the user question as a distinct question and direct the user to at least one answer of the identified Q and A row; and if not, post the user question as an unanswered question. 15. The data processing system of claim 14, wherein in executing the computer usable program code to determine if the user question has a higher similarity to a question in the identified Q and A row as compared to a question similarity threshold the processor further executes the computer usable program code to iterate over all questions in the identified Q and A row and comparing a user question vector to each of the question vectors to each of all questions to determine a question in the identified Q and A row having the highest similarity to the user question; and determine if the question in the identified Q and A row having the highest similarity to the user question exceeds the question similarity threshold. 16. The data processing system of claim 14, wherein in executing the computer usable program code to determine if the identified Q and A row has the last answer that has the corresponding confidence to the question, the processor further executes the computer usable program code to sum at least one vote corresponding to the last answer. 17. The data processing system of claim 16, wherein the confidence threshold is at least one vote. 18. 
The data processing system of claim 14, wherein the post the user question as an unanswered question comprises storing the user question to the Q and A corpus as a new row to the Q and A corpus. 19. The data processing system of claim 14, wherein in executing the computer usable program code to apply natural language processing to the user question the processor further executes the computer usable program code to reduce each word to a word root to form the user question vector. 20. The data processing system of claim 14, wherein in executing the computer usable program code to post the user question as an unanswered question, the processor further executes the computer usable program code to determine that the last answer is not similar below a last threshold to any answer in the identified Q and A row that is not the last answer, and in response, post the user question as an unanswered question.
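Claims 10–11 and 16–17 define the last answer's confidence as a sum of its votes, with a threshold of at least one vote. A minimal sketch of that check, with a hypothetical row layout — the `answers` and `votes` field names are assumptions for illustration:

```python
def last_answer_confident(row, confidence_threshold=1):
    # Claims 10/16: the last answer's confidence is the sum of its votes.
    # Claims 11/17: the confidence threshold is at least one vote.
    answers = row.get("answers", [])
    if not answers:
        return False  # no last answer, so nothing to be confident in
    votes = answers[-1].get("votes", [])
    return sum(votes) >= confidence_threshold
```

Only when this check passes does the server go on to compare question similarity and, potentially, block the duplicate submission and direct the user to the existing answers.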
2,600
10,283
10,283
15,799,393
2,683
A system for providing user-defined event notifications is described. The system typically comprises an intelligent digital assistant coupled to a local area network for receiving an audible event definition from a user, an electronic system coupled to the local area network for determining whether a first user-defined event has occurred, and an event detection and notification server coupled to the intelligent digital assistant and the first electronic system via a wide-area network. The event detection and notification server receives an audible event definition from the intelligent digital assistant, interprets the audible event definition to determine that the first user-defined event is present, identifies the first electronic system as a system that determines whether the event occurs, determines through communications with the first electronic system that the first user-defined event occurred, and provides a notification to the user that the first user-defined event has occurred.
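The flow the abstract describes — interpret the spoken event definition, identify which electronic system can observe the event, monitor it, and notify the user — can be sketched as below. The keyword-to-system registry and the `poll`/`notify` callbacks are hypothetical stand-ins for the server's real interpretation and monitoring machinery:

```python
# Hypothetical registry mapping event keywords to the electronic system
# that can observe them (identify the system from the textual interpretation).
SYSTEMS = {
    "door": "home_security",
    "window": "home_security",
    "light": "home_monitoring",
    "leaving": "home_security",
}

def identify_systems(event_text):
    # Pick every electronic system implied by the textual interpretation
    # of the audible event definition.
    words = event_text.lower().split()
    return sorted({SYSTEMS[w] for w in words if w in SYSTEMS})

def monitor_and_notify(event_text, poll, notify):
    # poll(system) -> True when that system reports its event occurred;
    # notify(msg) delivers the notification (audio stream, SMS, etc.).
    systems = identify_systems(event_text)
    if systems and all(poll(s) for s in systems):
        notify(f"Event occurred: {event_text}")
        return True
    return False
```

A real server would run `monitor_and_notify` on a polling loop or on push updates from each electronic system; the single-pass version here only shows the routing from event text to systems to notification.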
1. A system for providing user-defined event notifications, comprising: an intelligent digital assistant coupled to a local area network for receiving an audible event definition from a user; a plurality of electronic systems coupled to the local area network each for determining whether an event has occurred; and an event detection and notification server, coupled to the intelligent digital assistant and the plurality of electronic systems via a wide-area network, the event detection and notification system, comprising: a memory for storing processor-executable instructions and user account information; a network interface to the wide-area network; and a processor coupled to the memory and the network interface for executing the processor-executable instructions that causes the event detection and notification system to: receive, by the processor via the network interface, an audible event definition from the intelligent digital assistant, the audible event definition comprising an audible request from a user to the intelligent digital assistant to be notified when a first user-defined event occurs; interpret the audible event definition to identify from amongst the plurality of electronic systems a one of the plurality of electronic systems that can be used to determine whether the first user-defined event occurs; determine, based at least in part on signal information generated by the identified one of the plurality of electronic systems, that the first user-defined event has occurred; and provide, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event has occurred. 2. 
The system of claim 1, wherein the audible event definition further comprises an instruction on how to be notified when the first user-defined event occurs, and the intelligent digital assistant further comprises an audio output module; wherein the processor-executable instructions further comprise instructions that cause the processor to provide an audio stream to the intelligent digital assistant for audible notification by the audio output module that the first user-defined event has occurred. 3. The system of claim 1, wherein the audible event definition comprises an instruction on how to be notified when the first user-defined event occurs, and the processor-executable instructions that causes the processor to generate the notification further comprises instructions that causes the event detection and notification server to: encode the alert into a format suitable for transmission to the user in accordance with the instruction; identify a communication system associated with the instruction; and transmit, by the processor via the network interface, the notification to the user via the identified communication system. 4. The system of claim 1, wherein the audible event definition comprises an electronic device destination address, and the processor-executable instructions that cause the processor to generate the notification further comprises instructions that causes the intelligent digital assistant to: encode the notification into a format suitable for transmission to the user in accordance with the electronic device destination address; determine, by the processor via the network interface, a wide-area communication network over which to transmit the notification in accordance with the electronic device destination address; and transmit, by the processor via the network interface, the notification to the user via the wide-area communication system. 5. 
The system of claim 1, wherein the audible event definition further comprises an audible request from the user to the intelligent digital assistant to be notified when a second user-defined event occurs, the instructions cause the event detection and notification system to interpret the audible event definition to identify from amongst the plurality of electronic systems a further one of the plurality of electronic systems that can be used to determine whether the second user-defined event occurs, determine, based at least in part on signal information generated by the identified further one of the plurality of electronic systems, that the first user-defined event has occurred, and provide, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event and the second user-defined events have occurred, the identified one of the plurality of electronic systems comprises a home security system, the identified further one of the plurality of electronic systems comprises a home monitoring system, and the audible event definition comprises an instruction to alert the user when the user is leaving a dwelling and that at least one light in the dwelling is on. 6. The system of claim 5, wherein the intelligent digital assistant comprises an audio output module; wherein the processor-executable instructions comprise further instructions that cause the event determination and notification system to provide an audio stream to the intelligent digital assistant for audible notification by the audio output module that the user is leaving the dwelling and the at least one light is on. 7. 
The system of claim 1, wherein the audible event definition further comprises an audible request from the user to the intelligent digital assistant to be notified when a second user-defined event occurs, the instructions cause the event detection and notification system to interpret the audible event definition to identify from amongst the plurality of electronic systems a further one of the plurality of electronic systems that can be used to determine whether the second user-defined event occurs, determine, based at least in part on signal information generated by the identified further one of the plurality of electronic systems, that the first user-defined event has occurred, and provide, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event and the second user-defined events have occurred, the identified one of the plurality of electronic systems comprises a home security system, the identified further one of the plurality of electronic systems comprises a home monitoring system, and the audible event definition comprises an instruction to alert the user when a door has been opened and that at least one light in the dwelling is on. 8. The system of claim 7, wherein the intelligent digital assistant comprises an audio output module; wherein the processor-executable instructions comprise further instructions that cause the event determination and notification system to provide an audio stream to the intelligent digital assistant for audible notification by the audio output module that the door has been opened and the at least one light is on. 9. 
The system of claim 1, wherein the audible event definition further comprises an audible request from the user to the intelligent digital assistant to be notified when a second user-defined event occurs, the instructions cause the event detection and notification system to interpret the audible event definition to identify from amongst the plurality of electronic systems a further one of the plurality of electronic systems that can be used to determine whether the second user-defined event occurs, determine, based at least in part on signal information generated by the identified further one of the plurality of electronic systems, that the first user-defined event has occurred, and provide, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event and the second user-defined events have occurred, the identified one of the plurality of electronic systems comprises a home security system, the identified further one of the plurality of electronic systems comprises a home heating and/or cooling system, and the audible event definition comprises an instruction to alert the user when a door or a window is open for more than a predetermined time period and that the heating and/or cooling system is active. 10. The system of claim 9, wherein the intelligent digital assistant comprises an audio output module; wherein the notification comprises an audible alert sent to the intelligent digital assistant via the audio output module when the door or the window is open for more than the predetermined time period and that the heating and/or cooling system is active. 11. 
The system of claim 1, wherein the intelligent digital assistant comprises an audio output module, and the processor-executable instructions further comprise instructions to: generate an audio message confirming the audible event definition; and provide the audio message to the intelligent digital assistant, wherein the audio output module provides audio confirmation of the audible event definition. 12. A method performed by an event detection and notification server for providing user-defined event notifications, comprising: receiving, by a processor via a network interface, a textual interpretation of an audible event definition from a voice servicing system, the audible event definition comprising an audible request from a user of an intelligent digital assistant to be notified when a first user-defined event occurs; identifying, by the processor based upon the textual representation, a first one of a plurality of electronic systems that can be used to determine whether the first user-defined event occurs; monitoring, by the processor via the network interface, the first one of the plurality of electronic systems to determine whether the first user-defined event occurs; and providing, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event has occurred. 13. The method of claim 12, wherein the notification comprises an audio stream for playback by the intelligent digital assistant for audible notification by the intelligent digital assistant that the first user-defined event has occurred. 14. 
The method of claim 12, wherein the audible event definition comprises an instruction on how to be notified when the first user-defined event occurs, the method further comprising: encoding the alert into a format suitable for transmission to the user in accordance with the instruction; identifying a communication system associated with the instruction; and transmitting, by the processor via the network interface, the notification to the user via the identified communication system. 15. The method of claim 12, wherein the audible event definition comprises an electronic device destination address, the method further comprising: encoding the notification into a format suitable for transmission to the user in accordance with the electronic device destination address; determining, by the processor via the network interface, a wide-area communication network over which to transmit the notification in accordance with the electronic device destination address; and transmitting, by the processor via the network interface, the notification to the user via the wide-area communication system. 16. 
The method of claim 12, wherein the audible event definition further comprises an audible request from the user of the intelligent digital assistant to be notified when a second user-defined event occurs; wherein the method further comprises identifying, by the processor based upon the textual representation, a second one of a plurality of electronic systems that can be used to determine whether the second user-defined event occurs; monitoring, by the processor via the network interface, the second one of the plurality of electronic system to determine whether the second user-defined event occurs; and providing, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event and the second user-defined events have occurred; and wherein the first one of the plurality of electronic systems comprises a home security system, the second one of the plurality of electronic systems comprises a home monitoring system, and the audible event definition comprises an instruction to alert the user when the user is leaving a dwelling and that at least one light in the dwelling is on. 17. The method of claim 16, wherein the intelligent digital assistant comprises an audio output module, wherein the notification comprises an audio stream that is caused to be provided to the intelligent digital assistant for audible notification by the audio output module. 18. 
The method of claim 12, wherein the audible event definition further comprises an audible request from the user of the intelligent digital assistant to be notified when a second user-defined event occurs; wherein the method further comprises identifying, by the processor based upon the textual representation, a second one of a plurality of electronic systems that can be used to determine whether the second user-defined event occurs; monitoring, by the processor via the network interface, the second one of the plurality of electronic system to determine whether the second user-defined event occurs; and providing, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event and the second user-defined events have occurred; and wherein the first one of the plurality of electronic systems comprises a home security system, the second electronic system comprises a home monitoring system, and the audible event definition comprises an instruction to alert the user when a door has been opened and that at least one light in the dwelling is on. 19. The method of claim 16, wherein the intelligent digital assistant comprises an audio output module, wherein the notification comprises an audio stream that is caused to be provided to the intelligent digital assistant for audible notification by the audio output module. 20. 
The method of claim 12, wherein the audible event definition further comprises an audible request from the user of the intelligent digital assistant to be notified when a second user-defined event occurs; wherein the method further comprises identifying, by the processor based upon the textual representation, a second one of a plurality of electronic systems that can be used to determine whether the second user-defined event occurs; monitoring, by the processor via the network interface, the second one of the plurality of electronic system to determine whether the second user-defined event occurs; and providing, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event and the second user-defined events have occurred; wherein the first one of the plurality electronic systems comprises a home security system, the second electronic systems comprises a home heating and/or cooling system, and the audible event definition comprises an instruction to alert the user when a door or a window is open for more than a predetermined time period and that the heating and/or cooling system is active. 21. The method of claim 16, wherein the intelligent digital assistant comprises an audio output module, wherein the notification comprises an audio stream that is caused to be provided to the intelligent digital assistant for audible notification by the audio output module. 22. The method of claim 12, wherein the intelligent digital assistant comprises an audio output module, wherein the method further comprises: generating an audio message confirming the audio event instruction; and providing the audio message to the intelligent digital assistant, wherein the audio output module provides audio confirmation of the audible event definition.
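Claims 5–10 and 16–21 combine two user-defined events before alerting — for example, a door or window left open beyond a predetermined time period while the heating and/or cooling system is active. A minimal sketch of that compound check; the class, parameter names, and timing logic are illustrative assumptions, not the patent's implementation:

```python
import time

class CompoundEvent:
    # Fires when a door/window has been open longer than max_open_s
    # while the heating/cooling system reports active.
    def __init__(self, max_open_s):
        self.max_open_s = max_open_s
        self.opened_at = None  # timestamp when the door/window opened

    def update(self, door_open, hvac_active, now=None):
        # Call with the latest readings from the home security system
        # (door_open) and the heating/cooling system (hvac_active).
        now = time.time() if now is None else now
        if not door_open:
            self.opened_at = None  # closing the door resets the timer
            return False
        if self.opened_at is None:
            self.opened_at = now
        open_long_enough = (now - self.opened_at) > self.max_open_s
        return open_long_enough and hvac_active
```

The server would feed this object fresh readings each polling cycle and, when `update` first returns `True`, stream an audible alert to the intelligent digital assistant's audio output module.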
A system for providing user-defined event notifications is described. The system typically comprises an intelligent digital assistant coupled to a local area network for receiving an audible event definition from a user, an electronic system coupled to the local area network for determining whether a first user-defined event has occurred, and an event detection and notification server coupled to the intelligent digital assistant and the first electronic system via a wide-area network. The event detection and notification server receives an audible event definition from the intelligent digital assistant, interprets the audible event definition to determine that the first user-defined event is present, identifies the first electronic system as a system that determines whether the event occurs, determines through communications with the first electronic system that the first user-defined event occurred, and provides a notification to the user that the first user-defined event has occurred.1. A system for providing user-defined event notifications, comprising: an intelligent digital assistant coupled to a local area network for receiving an audible event definition from a user; a plurality of electronic systems coupled to the local area network each for determining whether an event has occurred; and an event detection and notification server, coupled to the intelligent digital assistant and the plurality of electronic systems via a wide-area network, the event detection and notification system, comprising: a memory for storing processor-executable instructions and user account information; a network interface to the wide-area network; and a processor coupled to the memory and the network interface for executing the processor-executable instructions that cause the event detection and notification system to: receive, by the processor via the network interface, an audible event definition from the intelligent digital assistant, the audible event definition comprising an audible 
request from a user to the intelligent digital assistant to be notified when a first user-defined event occurs; interpret the audible event definition to identify from amongst the plurality of electronic systems a one of the plurality of electronic systems that can be used to determine whether the first user-defined event occurs; determine, based at least in part on signal information generated by the identified one of the plurality of electronic systems, that the first user-defined event has occurred; and provide, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event has occurred. 2. The system of claim 1, wherein the audible event definition further comprises an instruction on how to be notified when the first user-defined event occurs, and the intelligent digital assistant further comprises an audio output module; wherein the processor-executable instructions further comprise instructions that cause the processor to provide an audio stream to the intelligent digital assistant for audible notification by the audio output module that the first user-defined event has occurred. 3. The system of claim 1, wherein the audible event definition comprises an instruction on how to be notified when the first user-defined event occurs, and the processor-executable instructions that causes the processor to generate the notification further comprises instructions that causes the event detection and notification server to: encode the alert into a format suitable for transmission to the user in accordance with the instruction; identify a communication system associated with the instruction; and transmit, by the processor via the network interface, the notification to the user via the identified communication system. 4. 
The system of claim 1, wherein the audible event definition comprises an electronic device destination address, and the processor-executable instructions that cause the processor to generate the notification further comprises instructions that causes the intelligent digital assistant to: encode the notification into a format suitable for transmission to the user in accordance with the electronic device destination address; determine, by the processor via the network interface, a wide-area communication network over which to transmit the notification in accordance with the electronic device destination address; and transmit, by the processor via the network interface, the notification to the user via the wide-area communication system. 5. The system of claim 1, wherein the audible event definition further comprises an audible request from the user to the intelligent digital assistant to be notified when a second user-defined event occurs, the instructions cause the event detection and notification system to interpret the audible event definition to identify from amongst the plurality of electronic systems a further one of the plurality of electronic systems that can be used to determine whether the second user-defined event occurs, determine, based at least in part on signal information generated by the identified further one of the plurality of electronic systems, that the first user-defined event has occurred, and provide, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event and the second user-defined events have occurred, the identified one of the plurality of electronic systems comprises a home security system, the identified further one of the plurality of electronic systems comprises a home monitoring system, and the audible event definition comprises an instruction to alert the user when the user is leaving a dwelling and that at least one light in the dwelling is on. 6. 
The system of claim 5, wherein the intelligent digital assistant comprises an audio output module; wherein the processor-executable instructions comprise further instructions that cause the event detection and notification system to provide an audio stream to the intelligent digital assistant for audible notification by the audio output module that the user is leaving the dwelling and the at least one light is on. 7. The system of claim 1, wherein the audible event definition further comprises an audible request from the user to the intelligent digital assistant to be notified when a second user-defined event occurs, the instructions cause the event detection and notification system to interpret the audible event definition to identify from amongst the plurality of electronic systems a further one of the plurality of electronic systems that can be used to determine whether the second user-defined event occurs, determine, based at least in part on signal information generated by the identified further one of the plurality of electronic systems, that the second user-defined event has occurred, and provide, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event and the second user-defined event have occurred, the identified one of the plurality of electronic systems comprises a home security system, the identified further one of the plurality of electronic systems comprises a home monitoring system, and the audible event definition comprises an instruction to alert the user when a door has been opened and that at least one light in the dwelling is on. 8. 
The system of claim 7, wherein the intelligent digital assistant comprises an audio output module; wherein the processor-executable instructions comprise further instructions that cause the event detection and notification system to provide an audio stream to the intelligent digital assistant for audible notification by the audio output module that the door has been opened and the at least one light is on. 9. The system of claim 1, wherein the audible event definition further comprises an audible request from the user to the intelligent digital assistant to be notified when a second user-defined event occurs, the instructions cause the event detection and notification system to interpret the audible event definition to identify from amongst the plurality of electronic systems a further one of the plurality of electronic systems that can be used to determine whether the second user-defined event occurs, determine, based at least in part on signal information generated by the identified further one of the plurality of electronic systems, that the second user-defined event has occurred, and provide, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event and the second user-defined event have occurred, the identified one of the plurality of electronic systems comprises a home security system, the identified further one of the plurality of electronic systems comprises a home heating and/or cooling system, and the audible event definition comprises an instruction to alert the user when a door or a window is open for more than a predetermined time period and that the heating and/or cooling system is active. 10. 
The system of claim 9, wherein the intelligent digital assistant comprises an audio output module; wherein the notification comprises an audible alert sent to the intelligent digital assistant via the audio output module when the door or the window is open for more than the predetermined time period and that the heating and/or cooling system is active. 11. The system of claim 1, wherein the intelligent digital assistant comprises an audio output module, and the processor-executable instructions further comprise instructions to: generate an audio message confirming the audible event definition; and provide the audio message to the intelligent digital assistant, wherein the audio output module provides audio confirmation of the audible event definition. 12. A method performed by an event detection and notification server for providing user-defined event notifications, comprising: receiving, by a processor via a network interface, a textual interpretation of an audible event definition from a voice servicing system, the audible event definition comprising an audible request from a user of an intelligent digital assistant to be notified when a first user-defined event occurs; identifying, by the processor based upon the textual interpretation, a first one of a plurality of electronic systems that can be used to determine whether the first user-defined event occurs; monitoring, by the processor via the network interface, the first one of the plurality of electronic systems to determine whether the first user-defined event occurs; and providing, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event has occurred. 13. The method of claim 12, wherein the notification comprises an audio stream for playback by the intelligent digital assistant for audible notification by the intelligent digital assistant that the first user-defined event has occurred. 14. 
The method of claim 12, wherein the audible event definition comprises an instruction on how to be notified when the first user-defined event occurs, the method further comprising: encoding the alert into a format suitable for transmission to the user in accordance with the instruction; identifying a communication system associated with the instruction; and transmitting, by the processor via the network interface, the notification to the user via the identified communication system. 15. The method of claim 12, wherein the audible event definition comprises an electronic device destination address, the method further comprising: encoding the notification into a format suitable for transmission to the user in accordance with the electronic device destination address; determining, by the processor via the network interface, a wide-area communication network over which to transmit the notification in accordance with the electronic device destination address; and transmitting, by the processor via the network interface, the notification to the user via the wide-area communication system. 16. 
The method of claim 12, wherein the audible event definition further comprises an audible request from the user of the intelligent digital assistant to be notified when a second user-defined event occurs; wherein the method further comprises identifying, by the processor based upon the textual interpretation, a second one of a plurality of electronic systems that can be used to determine whether the second user-defined event occurs; monitoring, by the processor via the network interface, the second one of the plurality of electronic systems to determine whether the second user-defined event occurs; and providing, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event and the second user-defined event have occurred; and wherein the first one of the plurality of electronic systems comprises a home security system, the second one of the plurality of electronic systems comprises a home monitoring system, and the audible event definition comprises an instruction to alert the user when the user is leaving a dwelling and that at least one light in the dwelling is on. 17. The method of claim 16, wherein the intelligent digital assistant comprises an audio output module, wherein the notification comprises an audio stream that is caused to be provided to the intelligent digital assistant for audible notification by the audio output module. 18. 
The method of claim 12, wherein the audible event definition further comprises an audible request from the user of the intelligent digital assistant to be notified when a second user-defined event occurs; wherein the method further comprises identifying, by the processor based upon the textual interpretation, a second one of a plurality of electronic systems that can be used to determine whether the second user-defined event occurs; monitoring, by the processor via the network interface, the second one of the plurality of electronic systems to determine whether the second user-defined event occurs; and providing, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event and the second user-defined event have occurred; and wherein the first one of the plurality of electronic systems comprises a home security system, the second electronic system comprises a home monitoring system, and the audible event definition comprises an instruction to alert the user when a door has been opened and that at least one light in the dwelling is on. 19. The method of claim 16, wherein the intelligent digital assistant comprises an audio output module, wherein the notification comprises an audio stream that is caused to be provided to the intelligent digital assistant for audible notification by the audio output module. 20. 
The method of claim 12, wherein the audible event definition further comprises an audible request from the user of the intelligent digital assistant to be notified when a second user-defined event occurs; wherein the method further comprises identifying, by the processor based upon the textual interpretation, a second one of a plurality of electronic systems that can be used to determine whether the second user-defined event occurs; monitoring, by the processor via the network interface, the second one of the plurality of electronic systems to determine whether the second user-defined event occurs; and providing, by the processor via the network interface, a notification to the user in response to it being determined that the first user-defined event and the second user-defined event have occurred; wherein the first one of the plurality of electronic systems comprises a home security system, the second electronic system comprises a home heating and/or cooling system, and the audible event definition comprises an instruction to alert the user when a door or a window is open for more than a predetermined time period and that the heating and/or cooling system is active. 21. The method of claim 16, wherein the intelligent digital assistant comprises an audio output module, wherein the notification comprises an audio stream that is caused to be provided to the intelligent digital assistant for audible notification by the audio output module. 22. The method of claim 12, wherein the intelligent digital assistant comprises an audio output module, wherein the method further comprises: generating an audio message confirming the audible event definition; and providing the audio message to the intelligent digital assistant, wherein the audio output module provides audio confirmation of the audible event definition.
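The compound-event behaviour recited in claims 5-10 (notify only when both the first and second user-defined events hold, each detected via a different electronic system) can be sketched as below. This is a hypothetical illustration only, not the patented implementation; all names (`EventServer`, `register`, `signal`) are invented for the example.

```python
# Minimal sketch of a compound user-defined event monitor, assuming each
# electronic system reports boolean signal states keyed by name.

class EventServer:
    def __init__(self):
        self.systems = {}        # system name -> set of currently active signal keys
        self.definitions = []    # (description, [(system, signal_key), ...])
        self.notifications = []  # descriptions of events that have fired

    def register(self, description, conditions):
        """conditions: list of (system_name, signal_key) pairs that must all hold."""
        self.definitions.append((description, conditions))

    def signal(self, system_name, signal_key, active):
        """Record a state change reported by one electronic system, then re-evaluate."""
        state = self.systems.setdefault(system_name, set())
        (state.add if active else state.discard)(signal_key)
        self._evaluate()

    def _evaluate(self):
        # Fire a notification once when every sub-condition is simultaneously true.
        for description, conditions in self.definitions:
            if all(key in self.systems.get(name, set())
                   for name, key in conditions):
                if description not in self.notifications:
                    self.notifications.append(description)

server = EventServer()
# "Alert me when I'm leaving and a light is still on" (per claims 5-6)
server.register("leaving with light on",
                [("security", "user_leaving"), ("monitoring", "light_on")])
server.signal("monitoring", "light_on", True)    # light alone: no alert yet
server.signal("security", "user_leaving", True)  # both hold: alert fires
```

In this reading, the server is stateless about how the notification is delivered; the delivery path (audio stream, encoded message over a wide-area network) would be chosen per the instruction in the event definition.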
2,600
10,284
10,284
15,506,908
2,652
An audio playback device may include a speaker configured to play media content, at least one interface including a switch, a transceiver configured to one of transmit the media content to a second audio playback device and receive the media content from the second audio playback device; and a controller configured to receive a signal indicative of a user input that is received at the switch for a time duration and to one of transmit the media content to the second audio playback device via the transceiver for playback of the media content at the second audio playback device and to receive the media content from the second audio playback device via the transceiver based on the time duration.
1. A first audio playback device comprising: a speaker configured to play media content; at least one interface including a switch; a transceiver configured to one of transmit the media content to a second audio playback device and receive the media content from the second audio playback device; and a controller configured to receive a signal indicative of a user input that is received at the switch for a time duration and to one of transmit the media content to the second audio playback device via the transceiver for playback of the media content at the second audio playback device and to receive the media content from the second audio playback device via the transceiver based on the time duration. 2. The first audio playback device of claim 1 wherein the controller is further configured to transmit the media content to the second audio playback device in the event the time duration is equal to a first time threshold. 3. The first audio playback device of claim 2 wherein the controller is further configured to receive the media content from the second audio playback device in the event the time duration is equal to a second time threshold, wherein the first time threshold is different from the second time threshold. 4. The first audio playback device of claim 3 wherein the second time threshold is greater than the first time threshold. 5. The first audio playback device of claim 1 wherein the controller is further configured to transmit the media content to the second audio playback device via the transceiver for concurrent playback of the media content at each of the first audio playback device and the second audio playback device based on the time duration. 6. 
The first audio playback device of claim 1 wherein the controller is further configured to receive the media content from the second audio playback device via the transceiver for concurrent playback of the media content at each of the first audio playback device and the second audio playback device based on the time duration. 7. The first audio playback device of claim 1 further comprising a status indicator to indicate at least one of an active status, an idle status, and a party status, wherein the party status is indicative of transmission of the media content to a plurality of playback devices via the transceiver for playback of the media content at each of the plurality of playback devices. 8. A media content playback system, comprising: a first playback device and a second playback device, each playback device having at least one pushbutton and each configured to transmit or receive media content via a wireless transceiver, wherein upon depression of the pushbutton at the first playback device, media content is transmitted therefrom to the second playback device. 9. The system of claim 8 wherein upon depression of the pushbutton at the second playback device, media content is received at the second playback device from the first playback device. 10. The system of claim 9 wherein media content is transmitted to the second playback device in response to a time duration of the depression of the pushbutton exceeding a predefined time threshold. 11. The system of claim 8 wherein the second playback device is configured to transmit media content to the first playback device in response to a time duration of the depression of the pushbutton at the second playback device exceeding a first predefined time threshold. 12. 
The system of claim 9 further comprising a third media playback device configured to receive media content from the first playback device concurrently with the second playback device receiving media content from the first playback device in response to a time duration of the depression of the pushbutton at one of the playback devices exceeding a second predefined time threshold, the second predefined time threshold being greater than the first predefined time threshold. 13. The system of claim 8 wherein each of the media playback devices includes a status indicator configured to indicate a status of the respective speaker. 14. The system of claim 11 wherein the status indicator may indicate at least one of an active, idle and party status, wherein the party status is indicative of transmission of the media content to a plurality of playback devices for playback of the media content at each of the plurality of audio playback devices. 15. A method, comprising: presenting, via a mobile device, a party mode screen including a list of selectable icons, each icon associated with a first location; receiving a selection of at least one of the selectable icons; updating the party mode screen to reflect the selection of the at least one of the selectable icons; and presenting a party mode complete screen in response to each of the selectable icons being selected. 16. The method of claim 15, further comprising presenting a room settings screen including at least one volume control associated with at least one of the first locations. 17. The method of claim 16, further comprising displaying a playlist screen including a repeat mode icon indicative of a repeat quantity of a current playlist. 18. The method of claim 17, wherein the playlist screen includes a list of media content and identifies currently played media content. 19. 
The method of claim 15, further comprising transmitting instructions to a plurality of speakers based on the selection of the at least one of the selectable icons, the instructions including media content for playback at the plurality of speakers. 20. The method of claim 15, further comprising presenting a location overview screen including an active room list indicating at least one second location having an active speaker therein, and an in-active room list indicating at least one third location having an in-active speaker therein.
An audio playback device may include a speaker configured to play media content, at least one interface including a switch, a transceiver configured to one of transmit the media content to a second audio playback device and receive the media content from the second audio playback device; and a controller configured to receive a signal indicative of a user input that is received at the switch for a time duration and to one of transmit the media content to the second audio playback device via the transceiver for playback of the media content at the second audio playback device and to receive the media content from the second audio playback device via the transceiver based on the time duration. 1. A first audio playback device comprising: a speaker configured to play media content; at least one interface including a switch; a transceiver configured to one of transmit the media content to a second audio playback device and receive the media content from the second audio playback device; and a controller configured to receive a signal indicative of a user input that is received at the switch for a time duration and to one of transmit the media content to the second audio playback device via the transceiver for playback of the media content at the second audio playback device and to receive the media content from the second audio playback device via the transceiver based on the time duration. 2. The first audio playback device of claim 1 wherein the controller is further configured to transmit the media content to the second audio playback device in the event the time duration is equal to a first time threshold. 3. The first audio playback device of claim 2 wherein the controller is further configured to receive the media content from the second audio playback device in the event the time duration is equal to a second time threshold, wherein the first time threshold is different from the second time threshold. 4. 
The first audio playback device of claim 3 wherein the second time threshold is greater than the first time threshold. 5. The first audio playback device of claim 1 wherein the controller is further configured to transmit the media content to the second audio playback device via the transceiver for concurrent playback of the media content at each of the first audio playback device and the second audio playback device based on the time duration. 6. The first audio playback device of claim 1 wherein the controller is further configured to receive the media content from the second audio playback device via the transceiver for concurrent playback of the media content at each of the first audio playback device and the second audio playback device based on the time duration. 7. The first audio playback device of claim 1 further comprising a status indicator to indicate at least one of an active status, an idle status, and a party status, wherein the party status is indicative of transmission of the media content to a plurality of playback devices via the transceiver for playback of the media content at each of the plurality of playback devices. 8. A media content playback system, comprising: a first playback device and a second playback device, each playback device having at least one pushbutton and each configured to transmit or receive media content via a wireless transceiver, wherein upon depression of the pushbutton at the first playback device, media content is transmitted therefrom to the second playback device. 9. The system of claim 8 wherein upon depression of the pushbutton at the second playback device, media content is received at the second playback device from the first playback device. 10. The system of claim 9 wherein media content is transmitted to the second playback device in response to a time duration of the depression of the pushbutton exceeding a predefined time threshold. 11. 
The system of claim 8 wherein the second playback device is configured to transmit media content to the first playback device in response to a time duration of the depression of the pushbutton at the second playback device exceeding a first predefined time threshold. 12. The system of claim 9 further comprising a third media playback device configured to receive media content from the first playback device concurrently with the second playback device receiving media content from the first playback device in response to a time duration of the depression of the pushbutton at one of the playback devices exceeding a second predefined time threshold, the second predefined time threshold being greater than the first predefined time threshold. 13. The system of claim 8 wherein each of the media playback devices includes a status indicator configured to indicate a status of the respective speaker. 14. The system of claim 11 wherein the status indicator may indicate at least one of an active, idle and party status, wherein the party status is indicative of transmission of the media content to a plurality of playback devices for playback of the media content at each of the plurality of audio playback devices. 15. A method, comprising: presenting, via a mobile device, a party mode screen including a list of selectable icons, each icon associated with a first location; receiving a selection of at least one of the selectable icons; updating the party mode screen to reflect the selection of the at least one of the selectable icons; and presenting a party mode complete screen in response to each of the selectable icons being selected. 16. The method of claim 15, further comprising presenting a room settings screen including at least one volume control associated with at least one of the first locations. 17. The method of claim 16, further comprising displaying a playlist screen including a repeat mode icon indicative of a repeat quantity of a current playlist. 18. 
The method of claim 17, wherein the playlist screen includes a list of media content and identifies currently played media content. 19. The method of claim 15, further comprising transmitting instructions to a plurality of speakers based on the selection of the at least one of the selectable icons, the instructions including media content for playback at the plurality of speakers. 20. The method of claim 15, further comprising presenting a location overview screen including an active room list indicating at least one second location having an active speaker therein, and an in-active room list indicating at least one third location having an in-active speaker therein.
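The press-duration logic in claims 2-4 (two distinct thresholds, the second greater than the first, selecting between transmitting to and receiving from the second device) can be sketched as a simple dispatch. The threshold values and the exact mapping of shorter-hold-to-transmit versus longer-hold-to-receive are assumptions for illustration; the claims fix only that the thresholds differ and that the second is larger.

```python
def resolve_button_action(duration, first_threshold=1.0, second_threshold=3.0):
    """Hypothetical mapping of switch hold time (seconds) to a playback action.

    Threshold values are invented for this sketch; per claim 4 the second
    threshold must simply be greater than the first.
    """
    if duration >= second_threshold:
        return "receive"   # longer hold: pull media from the other device
    if duration >= first_threshold:
        return "transmit"  # shorter hold: push media to the other device
    return "ignore"        # too brief to register as either action
```

A controller would feed the measured hold duration of the pushbutton into this function and drive the transceiver accordingly; a "party" mode (claim 7) could be one more branch at a still-longer threshold.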
2,600
10,285
10,285
15,851,215
2,684
The present disclosure relates generally to a client-server communications and processing mechanism for use in a radar unit that can be mounted in a car, e.g., on the dash of the car, and a mobile computer that can be placed in a receptacle mounted, e.g., in the interior along the firewall of the car. The client-server communications and processing mechanism enables communications between the units that can be one way, both ways, or a combination of both depending on the type of data needed to be communicated. The communications between the units includes speed tracking data and lock and release data.
1. A system for managing and remotely displaying detection information of a speed measuring device, comprising: a personal computer; at least one storage resource for managing resources of the system at the speed measuring device wherein the speed measuring device processor is configured to execute application code instructions that are stored in the storage resource to cause the system to: receive vehicle tracking data at the speed measuring device; store the tracking data at the speed measuring device in the storage resource; send the tracking data to the personal computing device; receive the tracking data from the speed measuring device at the personal computing device; store the tracking data at the personal computing device in the storage resource; and display the tracking data at the personal computing device; wherein the speed measuring device is communicably coupled with the personal computing device over a peer-to-peer wired or wireless network. 2. The system of claim 1 wherein the application code instructions to cause the system to send the tracking data further include instructions to cause the system to automatically send tracking data to the personal computing device when a speed associated with a first vehicle is greater than a speed limit and to display unlocked speed data of a second vehicle. 3. 
The system of claim 1 wherein the application code instructions to cause the system to send the tracking data further include instructions to cause the system to: receive an automatically generated lock command identifying select tracking data at the speed measuring device for a first target and to allow continued tracking of speed data for a second target; and send the select tracking data and a lock indicator to the personal computing device in response to receiving the automatically generated lock command and displaying the select tracking data and lock indicator. 4. The system of claim 1 wherein the application code instructions to cause the system to send the tracking data further include instructions to cause the system to: receive a user initiated lock command from the personal computing device identifying select tracking data for one of two or more vehicles; and send the select tracking data and a lock indicator to the personal computing device in response to receiving the user initiated lock command for the one of the two or more vehicles while displaying tracking speed data for at least one of the two or more vehicles. 5. The system of claim 1 wherein the processor is configured to execute application code instructions to cause the system to: receive a lock command identifying select tracking data for one of a plurality of vehicles at the personal computing device; and associate a lock indicator with the select tracking data for the one of the plurality of vehicles in the storage resource in response to receiving a lock command; wherein the lock command is one of an automatically generated lock command and a user initiated lock command. 6. 
The system of claim 1 wherein the processor is configured to execute application code instructions to cause the system to associate a lock indicator with select tracking data for one of a plurality of vehicles, in response to receiving a lock command, in the storage resource of at least one of the speed measuring device and the personal computing device while displaying tracking speed data for at least one of the plurality of vehicles. 7. The system of claim 6 wherein in response to the association of the lock indicator with the select tracking data in the storage resource the tracking data displayed on the display of the personal computing device is select tracking data and is locked on the display for one of the plurality of vehicles and active speed data is displayed for another of the plurality of vehicles. 8. The system of claim 6 wherein the processor is configured to execute application code instructions to cause the system to: receive a release request and, in response, remove from storage a lock indicator associated with the select tracking data for a first vehicle from a display that is displaying tracking speed data for another vehicle; wherein the release request is generated at one of the speed measuring device or the personal computer and the lock indicator removed from storage is removed from the storage resource of at least one of the speed measuring device or personal computer. 9. 
A computer aided method of a system for managing and remotely displaying detection information of a speed measuring device, the method comprising: receiving vehicle tracking data at the speed measuring device; storing the tracking data at the speed measuring device in the storage resource; sending the tracking data to the personal computing device; receiving the tracking data from the speed measuring device at the personal computing device; storing the tracking data at the personal computing device in the storage resource; and displaying the tracking data at the personal computing device; wherein the speed measuring device is communicably coupled with the personal computing device over a peer-to-peer wired or wireless network. 10. The computer aided method of claim 9 wherein sending the tracking data to the personal computer further comprises automatically sending tracking data to the personal computing device for each of a plurality of vehicles. 11. The computer aided method of claim 9 wherein sending the tracking data to the personal computer further comprises: receiving an automatically generated lock command identifying select tracking data at the speed measuring device for one of the plurality of vehicles; and sending at least one of the select tracking data and a lock indicator to the personal computing device in response to receiving the automatically generated lock command for the one of the plurality of vehicles and displaying the lock indicator and locked tracking data for the one of the plurality of vehicles while displaying active tracking data for another of the plurality of vehicles. 12. 
The computer-aided method of claim 9 wherein sending the tracking data to the personal computer further comprises: receiving a user initiated lock command identifying select tracking data for one of a plurality of vehicles; and sending the select tracking data and a lock indicator to the personal computing device in response to receiving the user initiated lock command for the one of the plurality of vehicles. 13. The computer-aided method of claim 9 further comprises: receiving a lock command identifying select tracking data at the personal computing device for each of a plurality of vehicles at different times; and associating a lock indicator with the select tracking data in the storage resource in response to receiving a lock command for each of the plurality of vehicles at the time associated with each vehicle; wherein the lock command is one of an automatically generated lock command and a user initiated lock command. 14. A non-transitory computer readable medium containing computer readable instructions for instructing a first and second computing machine to manage and remotely display speed detection information of a speed measuring device, the computer-readable instructions comprising instructions for causing the computing machine to: receive vehicle tracking data at the speed measuring device; store the tracking data at the speed measuring device in a storage resource; send the tracking data to the personal computing device; receive the tracking data from the speed measuring device at the personal computing device; store the tracking data at the personal computing device in a storage resource; and display the tracking data at the personal computing device; wherein the speed measuring device is communicably coupled with the personal computing device over a peer-to-peer wired or wireless network. 15. 
The non-transitory computer readable medium of claim 14 further includes computer readable instructions that cause the computing machine, when sending the tracking data, to automatically send tracking data to the personal computing device for each of a plurality of vehicles. 16. The non-transitory computer readable medium of claim 14 further includes computer readable instructions to cause the computing machine to: receive an automatically generated lock command identifying select tracking data at the speed measuring device from a plurality of tracking data associated with a plurality of vehicles; and send at least one of the select tracking data and a lock indicator to the personal computing device in response to receiving the automatically generated lock command for only one of the plurality of vehicles while continuing to display the tracking data for the other of the plurality of vehicles. 17. The non-transitory computer readable medium of claim 14 further includes computer readable instructions to cause the computing machine to: receive a user initiated lock command identifying select tracking data for one of a plurality of vehicles; and send the select tracking data and a lock indicator to the personal computing device in response to receiving the user initiated lock command from the personal computing device for the one of the plurality of vehicles. 18. 
The non-transitory computer readable medium of claim 14 further includes computer readable instructions to cause the computing machine to: receive a lock command identifying select tracking data for one of a plurality of vehicles at the personal computing device; and associate a lock indicator with the select tracking data in the storage resource in response to receiving a lock command for the one of the plurality of vehicles while not locking another of the plurality of vehicles; wherein the lock command is one of an automatically generated lock command and a user initiated lock command. 19. The non-transitory computer readable medium of claim 14 further includes computer readable instructions to cause the computing machine to: associate a lock indicator with select tracking data for one of the plurality of vehicles, in response to receiving a lock command, in the storage resource of at least one of the speed measuring device and the personal computing device and to display tracking speed data for another of the plurality of vehicles. 20. The non-transitory computer readable medium of claim 19 wherein in response to the association of the lock indicator with the select tracking data in the storage resource for the one of the plurality of vehicles, the tracking data displayed on the display of the personal computing device is select tracking data and locked on the display while the tracking speed data for the another of the plurality of vehicles is not locked on the display.
The present disclosure relates generally to a client-server communications and processing mechanism for use in a radar unit that can be mounted in a car, e.g. on the dash of the car, and a mobile computer that can be placed in a receptacle mounted, e.g., in the interior along the firewall of the car. The client-server communications and processing mechanism enables communications between the units that can be one way, both ways, or a combination of both depending on the type of data needed to be communicated. The communications between the units include speed tracking data and lock and release data. 1. A system for managing and remotely displaying detection information of a speed measuring device, comprising: a personal computer; at least one storage resource for managing resources of the system at the speed measuring device wherein the speed measuring device processor is configured to execute application code instructions that are stored in the storage resource to cause the system to: receive vehicle tracking data at the speed measuring device; store the tracking data at the speed measuring device in the storage resource; send the tracking data to the personal computing device; receive the tracking data from the speed measuring device at the personal computing device; store the tracking data at the personal computing device in the storage resource; and display the tracking data at the personal computing device; wherein the speed measuring device is communicably coupled with the personal computing device over a peer-to-peer wired or wireless network. 2. The system of claim 1 wherein the application code instructions that cause the system to send the tracking data further include application code instructions that cause the system to automatically send tracking data to the personal computing device when a speed associated with a first vehicle is greater than a speed limit and to display unlocked speed data of a second vehicle. 3. 
The system of claim 1 wherein the application code instructions that cause the system to send the tracking data further include application code instructions that cause the system to: receive an automatically generated lock command identifying select tracking data at the speed measuring device for a first target and to allow continued tracking of speed data for a second target; and send the select tracking data and a lock indicator to the personal computing device in response to receiving the automatically generated lock command and displaying the select tracking data and lock indicator. 4. The system of claim 1 wherein the application code instructions that cause the system to send the tracking data further include application code instructions that cause the system to: receive a user initiated lock command from the personal computing device identifying select tracking data for one of two or more vehicles; and send the select tracking data and a lock indicator to the personal computing device in response to receiving the user initiated lock command for the one of the two or more vehicles while displaying tracking speed data for at least one of the two or more vehicles. 5. The system of claim 1 wherein the processor is configured to execute application code instructions to cause the system to: receive a lock command identifying select tracking data for one of a plurality of vehicles at the personal computing device; and associate a lock indicator with the select tracking data for the one of the plurality of vehicles in the storage resource in response to receiving a lock command; wherein the lock command is one of an automatically generated lock command and a user initiated lock command. 6. 
The system of claim 1 wherein the processor is configured to execute application code instructions to cause the system to associate a lock indicator with select tracking data for one of a plurality of vehicles, in response to receiving a lock command, in the storage resource of at least one of the speed measuring device and the personal computing device while displaying tracking speed data for at least one of the plurality of vehicles. 7. The system of claim 6 wherein in response to the association of the lock indicator with the select tracking data in the storage resource the tracking data displayed on the display of the personal computing device is select tracking data and is locked on the display for one of the plurality of vehicles and active speed data is displayed for another of the plurality of vehicles. 8. The system of claim 6 wherein the processor is configured to execute application code instructions to cause the system to: receive a release request and, in response, remove from storage a lock indicator associated with the select tracking data for a first vehicle from a display that is displaying tracking speed data for another vehicle; wherein the release request is generated at one of the speed measuring device or the personal computer and the lock indicator removed from storage is removed from the storage resource of at least one of the speed measuring device or personal computer. 9. 
A computer-aided method of a system for managing and remotely displaying detection information of a speed measuring device, the method comprising: receiving vehicle tracking data at the speed measuring device; storing the tracking data at the speed measuring device in the storage resource; sending the tracking data to the personal computing device; receiving the tracking data from the speed measuring device at the personal computing device; storing the tracking data at the personal computing device in the storage resource; and displaying the tracking data at the personal computing device; wherein the speed measuring device is communicably coupled with the personal computing device over a peer-to-peer wired or wireless network. 10. The computer-aided method of claim 9 wherein sending the tracking data to the personal computer further comprises automatically sending tracking data to the personal computing device for each of a plurality of vehicles. 11. The computer-aided method of claim 9 wherein sending the tracking data to the personal computer further comprises: receiving an automatically generated lock command identifying select tracking data at the speed measuring device for one of the plurality of vehicles; and sending at least one of the select tracking data and a lock indicator to the personal computing device in response to receiving the automatically generated lock command for the one of the plurality of vehicles and displaying the lock indicator and locked tracking data for the one of the plurality of vehicles while displaying active tracking data for another of the plurality of vehicles. 12. 
The computer-aided method of claim 9 wherein sending the tracking data to the personal computer further comprises: receiving a user initiated lock command identifying select tracking data for one of a plurality of vehicles; and sending the select tracking data and a lock indicator to the personal computing device in response to receiving the user initiated lock command for the one of the plurality of vehicles. 13. The computer-aided method of claim 9 further comprising: receiving a lock command identifying select tracking data at the personal computing device for each of a plurality of vehicles at different times; and associating a lock indicator with the select tracking data in the storage resource in response to receiving a lock command for each of the plurality of vehicles at the time associated with each vehicle; wherein the lock command is one of an automatically generated lock command and a user initiated lock command. 14. A non-transitory computer readable medium containing computer readable instructions for instructing a first and second computing machine to manage and remotely display speed detection information of a speed measuring device, the computer-readable instructions comprising instructions for causing the computing machine to: receive vehicle tracking data at the speed measuring device; store the tracking data at the speed measuring device in a storage resource; send the tracking data to the personal computing device; receive the tracking data from the speed measuring device at the personal computing device; store the tracking data at the personal computing device in a storage resource; and display the tracking data at the personal computing device; wherein the speed measuring device is communicably coupled with the personal computing device over a peer-to-peer wired or wireless network. 15. 
The non-transitory computer readable medium of claim 14 further includes computer readable instructions that cause the computing machine, when sending the tracking data, to automatically send tracking data to the personal computing device for each of a plurality of vehicles. 16. The non-transitory computer readable medium of claim 14 further includes computer readable instructions to cause the computing machine to: receive an automatically generated lock command identifying select tracking data at the speed measuring device from a plurality of tracking data associated with a plurality of vehicles; and send at least one of the select tracking data and a lock indicator to the personal computing device in response to receiving the automatically generated lock command for only one of the plurality of vehicles while continuing to display the tracking data for the other of the plurality of vehicles. 17. The non-transitory computer readable medium of claim 14 further includes computer readable instructions to cause the computing machine to: receive a user initiated lock command identifying select tracking data for one of a plurality of vehicles; and send the select tracking data and a lock indicator to the personal computing device in response to receiving the user initiated lock command from the personal computing device for the one of the plurality of vehicles. 18. 
The non-transitory computer readable medium of claim 14 further includes computer readable instructions to cause the computing machine to: receive a lock command identifying select tracking data for one of a plurality of vehicles at the personal computing device; and associate a lock indicator with the select tracking data in the storage resource in response to receiving a lock command for the one of the plurality of vehicles while not locking another of the plurality of vehicles; wherein the lock command is one of an automatically generated lock command and a user initiated lock command. 19. The non-transitory computer readable medium of claim 14 further includes computer readable instructions to cause the computing machine to: associate a lock indicator with select tracking data for one of the plurality of vehicles, in response to receiving a lock command, in the storage resource of at least one of the speed measuring device and the personal computing device and to display tracking speed data for another of the plurality of vehicles. 20. The non-transitory computer readable medium of claim 19 wherein in response to the association of the lock indicator with the select tracking data in the storage resource for the one of the plurality of vehicles, the tracking data displayed on the display of the personal computing device is select tracking data and locked on the display while the tracking speed data for the another of the plurality of vehicles is not locked on the display.
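The lock/release behavior recited across these claims — a lock indicator frozen onto select tracking data for one vehicle while active tracking continues for the others, releasable afterward — can be sketched as a small state model. This is a hypothetical illustration only; the class and method names are invented and the peer-to-peer transport between the two devices is omitted:

```python
from dataclasses import dataclass

@dataclass
class TrackRecord:
    vehicle_id: str
    speed: float
    locked: bool = False  # the "lock indicator" associated with the record

class TrackingStore:
    """Hypothetical storage resource shared by the speed measuring
    device and the personal computing device."""

    def __init__(self):
        self.records = {}

    def receive(self, vehicle_id, speed):
        # Locked records are frozen; unlocked targets keep updating.
        rec = self.records.get(vehicle_id)
        if rec is not None and rec.locked:
            return rec
        rec = TrackRecord(vehicle_id, speed)
        self.records[vehicle_id] = rec
        return rec

    def lock(self, vehicle_id):
        # Same path for automatically generated and user initiated commands.
        self.records[vehicle_id].locked = True

    def release(self, vehicle_id):
        # Remove the lock indicator so updates resume.
        self.records[vehicle_id].locked = False

store = TrackingStore()
store.receive("A", 61.0)
store.receive("B", 45.0)
store.lock("A")            # e.g. speed of A exceeded the limit
store.receive("A", 63.0)   # ignored: A stays locked at 61.0
store.receive("B", 47.0)   # B is still actively tracked
```

In the claimed system, each `TrackRecord` and its lock indicator would additionally be sent over the peer-to-peer link and mirrored in the other device's storage resource; that replication layer is not shown here.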
2,600
10,286
10,286
13,730,441
2,616
The invention provides a method for approximating motion blur in a rendered frame from within a graphics driver. For example, the method includes the steps of: (a) obtaining by the graphics driver values of a frame transformation matrix for a current rendered frame and a previous rendered frame respectively; (b) obtaining by the graphics driver depth values of the current rendered frame; and (c) loading by the graphics driver a shader onto a GPU, in order to enable the GPU to adjust color values of one or more sample areas on the current rendered frame, based on at least the values of the frame transformation matrix for the current rendered frame and the previous rendered frame and the depth values of the current rendered frame, whereby a motion blur effect is created in the current rendered frame.
1. A method for approximating motion blur in a rendered frame from within a graphics driver, comprising: obtaining values of a frame transformation matrix for a current rendered frame and a previous rendered frame respectively; obtaining depth values of the current rendered frame; and loading a shader onto a GPU, in order to enable the GPU to adjust color values of one or more sample areas on the current rendered frame, based on at least the values of the frame transformation matrix for the current rendered frame and the previous rendered frame and the depth values of the current rendered frame, a motion blur effect being created in the current rendered frame. 2. The method of claim 1, wherein obtaining values further comprises identifying a frame transformation matrix in a constant value buffer of a graphics application. 3. The method of claim 1, wherein obtaining depth values further comprises identifying a depth buffer used by the current rendered frame. 4. The method of claim 3, wherein: obtaining depth values further comprises obtaining depth values of the previous rendered frame from the identified depth buffer; and loading the shader further comprises loading the shader in order to enable the GPU to adjust color values further based on the depth values of the previous rendered frame. 5. The method of claim 3, further comprising: obtaining color values of the current rendered frame or the previous rendered frame, wherein loading the shader further comprises loading the shader in order to enable the GPU to adjust color values further based on the color values of the current rendered frame and/or previous rendered frame. 6. The method of claim 1, wherein loading the shader further comprises loading the shader in order to enable the GPU to determine the one or more sample areas so that the one or more sample areas do not include a given object on the current rendered frame. 7. The method of claim 6, wherein the given object is a 2D object. 8. 
The method of claim 1, further comprising determining a number of the one or more sample areas. 9. The method of claim 8, further comprising determining the number of sample areas according to user settings or system operation modes. 10. A computer-readable medium storing instructions that, when executed by a processor, cause the processor to approximate motion blur in a rendered frame from within a graphics driver, by performing the steps of: obtaining values of a frame transformation matrix for a current rendered frame and a previous rendered frame respectively; obtaining depth values of the current rendered frame; and loading a shader onto a GPU, in order to enable the GPU to adjust color values of one or more sample areas on the current rendered frame, based on at least the values of the frame transformation matrix for the current rendered frame and the previous rendered frame and the depth values of the current rendered frame, a motion blur effect being created in the current rendered frame. 11. The computer-readable medium of claim 10, wherein obtaining values further comprises identifying a frame transformation matrix in a constant value buffer of a graphics application. 12. The computer-readable medium of claim 10, wherein obtaining depth values further comprises identifying a depth buffer used by the current rendered frame. 13. The computer-readable medium of claim 12, wherein: obtaining depth values further comprises obtaining depth values of the previous rendered frame from the identified depth buffer; and loading the shader further comprises loading the shader in order to enable the GPU to adjust color values further based on the depth values of the previous rendered frame. 14. 
The computer-readable medium of claim 12, further comprising: obtaining color values of the current rendered frame or the previous rendered frame, wherein loading the shader further comprises loading the shader in order to enable the GPU to adjust color values further based on the color values of the current rendered frame and/or previous rendered frame. 15. The computer-readable medium of claim 10, wherein loading the shader further comprises loading the shader in order to enable the GPU to determine the one or more sample areas so that the one or more sample areas do not include a given object on the current rendered frame. 16. The computer-readable medium of claim 15, wherein the given object is a 2D object. 17. The computer-readable medium of claim 10, further comprising determining a number of the one or more sample areas. 18. The computer-readable medium of claim 17, further comprising determining the number of sample areas according to user settings or system operation modes. 19. A computer system, comprising: a graphics processing unit (GPU); a processor coupled to the GPU; and a memory coupled to the processor, wherein the memory includes an application having instructions that, when executed by the processor, cause the processor to: obtain values of a frame transformation matrix for a current rendered frame and a previous rendered frame respectively; obtain depth values of the current rendered frame; and load a shader onto a GPU, in order to enable the GPU to adjust color values of one or more sample areas on the current rendered frame, based on at least the values of the frame transformation matrix for the current rendered frame and the previous rendered frame and the depth values of the current rendered frame, a motion blur effect being created in the current rendered frame. 20. 
The computer system of claim 19, wherein: obtaining depth values further comprises obtaining depth values of the previous rendered frame from the identified depth buffer; and loading the shader further comprises loading the shader in order to enable the GPU to adjust color values further based on the depth values of the previous rendered frame.
The invention provides a method for approximating motion blur in a rendered frame from within a graphics driver. For example, the method includes the steps of: (a) obtaining by the graphics driver values of a frame transformation matrix for a current rendered frame and a previous rendered frame respectively; (b) obtaining by the graphics driver depth values of the current rendered frame; and (c) loading by the graphics driver a shader onto a GPU, in order to enable the GPU to adjust color values of one or more sample areas on the current rendered frame, based on at least the values of the frame transformation matrix for the current rendered frame and the previous rendered frame and the depth values of the current rendered frame, whereby a motion blur effect is created in the current rendered frame. 1. A method for approximating motion blur in a rendered frame from within a graphics driver, comprising: obtaining values of a frame transformation matrix for a current rendered frame and a previous rendered frame respectively; obtaining depth values of the current rendered frame; and loading a shader onto a GPU, in order to enable the GPU to adjust color values of one or more sample areas on the current rendered frame, based on at least the values of the frame transformation matrix for the current rendered frame and the previous rendered frame and the depth values of the current rendered frame, a motion blur effect being created in the current rendered frame. 2. The method of claim 1, wherein obtaining values further comprises identifying a frame transformation matrix in a constant value buffer of a graphics application. 3. The method of claim 1, wherein obtaining depth values further comprises identifying a depth buffer used by the current rendered frame. 4. 
The method of claim 3, wherein: obtaining depth values further comprises obtaining depth values of the previous rendered frame from the identified depth buffer; and loading the shader further comprises loading the shader in order to enable the GPU to adjust color values further based on the depth values of the previous rendered frame. 5. The method of claim 3, further comprising: obtaining color values of the current rendered frame or the previous rendered frame, wherein loading the shader further comprises loading the shader in order to enable the GPU to adjust color values further based on the color values of the current rendered frame and/or previous rendered frame. 6. The method of claim 1, wherein loading the shader further comprises loading the shader in order to enable the GPU to determine the one or more sample areas so that the one or more sample areas do not include a given object on the current rendered frame. 7. The method of claim 6, wherein the given object is a 2D object. 8. The method of claim 1, further comprising determining a number of the one or more sample areas. 9. The method of claim 8, further comprising determining the number of sample areas according to user settings or system operation modes. 10. 
A computer-readable medium storing instructions that, when executed by a processor, cause the processor to approximate motion blur in a rendered frame from within a graphics driver, by performing the steps of: obtaining values of a frame transformation matrix for a current rendered frame and a previous rendered frame respectively; obtaining depth values of the current rendered frame; and loading a shader onto a GPU, in order to enable the GPU to adjust color values of one or more sample areas on the current rendered frame, based on at least the values of the frame transformation matrix for the current rendered frame and the previous rendered frame and the depth values of the current rendered frame, a motion blur effect being created in the current rendered frame. 11. The computer-readable medium of claim 10, wherein obtaining values further comprises identifying a frame transformation matrix in a constant value buffer of a graphics application. 12. The computer-readable medium of claim 10, wherein obtaining depth values further comprises identifying a depth buffer used by the current rendered frame. 13. The computer-readable medium of claim 12, wherein: obtaining depth values further comprises obtaining depth values of the previous rendered frame from the identified depth buffer; and loading the shader further comprises loading the shader in order to enable the GPU to adjust color values further based on the depth values of the previous rendered frame. 14. The computer-readable medium of claim 12, further comprising: obtaining color values of the current rendered frame or the previous rendered frame, wherein loading the shader further comprises loading the shader in order to enable the GPU to adjust color values further based on the color values of the current rendered frame and/or previous rendered frame. 15. 
The computer-readable medium of claim 10, wherein loading the shader further comprises loading the shader in order to enable the GPU to determine the one or more sample areas so that the one or more sample areas do not include a given object on the current rendered frame. 16. The computer-readable medium of claim 15, wherein the given object is a 2D object. 17. The computer-readable medium of claim 10, further comprising determining a number of the one or more sample areas. 18. The computer-readable medium of claim 17, further comprising determining the number of sample areas according to user settings or system operation modes. 19. A computer system, comprising: a graphics processing unit (GPU); a processor coupled to the GPU; and a memory coupled to the processor, wherein the memory includes an application having instructions that, when executed by the processor, cause the processor to: obtain values of a frame transformation matrix for a current rendered frame and a previous rendered frame respectively; obtain depth values of the current rendered frame; and load a shader onto a GPU, in order to enable the GPU to adjust color values of one or more sample areas on the current rendered frame, based on at least the values of the frame transformation matrix for the current rendered frame and the previous rendered frame and the depth values of the current rendered frame, a motion blur effect being created in the current rendered frame. 20. The computer system of claim 19, wherein: obtaining depth values further comprises obtaining depth values of the previous rendered frame from the identified depth buffer; and loading the shader further comprises loading the shader in order to enable the GPU to adjust color values further based on the depth values of the previous rendered frame.
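The core computation these claims delegate to the shader — derive per-pixel screen-space velocity from the current and previous frame transformation matrices plus the current depth buffer, then average color samples along that velocity — is a standard screen-space motion-blur approximation. Below is a minimal CPU-side sketch in NumPy under the assumption that the frame transformation matrix is a 4x4 view-projection matrix; the function name and buffer layout are illustrative, not the driver's actual interface:

```python
import numpy as np

def motion_blur(color, depth, curr_vp, prev_vp, n_samples=4):
    """Approximate per-pixel motion blur the way a driver-injected
    shader might: reproject each pixel through the previous frame's
    matrix and average color samples along the resulting velocity.

    color: (H, W, 3) array; depth: (H, W) NDC depth values;
    curr_vp / prev_vp: 4x4 view-projection matrices (assumed invertible).
    """
    h, w, _ = color.shape
    out = np.zeros_like(color, dtype=float)
    inv_curr = np.linalg.inv(curr_vp)
    for y in range(h):
        for x in range(w):
            # Reconstruct a homogeneous position from screen coords + depth.
            ndc = np.array([2 * x / w - 1, 2 * y / h - 1, depth[y, x], 1.0])
            world = inv_curr @ ndc            # back to world space
            prev = prev_vp @ world            # forward through previous frame
            prev /= prev[3]                   # perspective divide
            # Screen-space velocity between the two frames, in pixels.
            vx = (prev[0] - ndc[0]) * w / 2
            vy = (prev[1] - ndc[1]) * h / 2
            # Average color samples along the velocity vector.
            acc = np.zeros(3)
            for i in range(n_samples):
                t = i / max(n_samples - 1, 1)
                sx = int(np.clip(x - vx * t, 0, w - 1))
                sy = int(np.clip(y - vy * t, 0, h - 1))
                acc += color[sy, sx]
            out[y, x] = acc / n_samples
    return out
```

When the two matrices are identical the velocity is zero everywhere and the image passes through unchanged; a real shader would do the same math per fragment on the GPU, and could skip sample areas covering excluded 2D objects (e.g. HUD elements) as claims 6 and 15 describe.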
2,600
10,287
10,287
15,274,060
2,613
Various embodiments associated with a composite image are described. In one embodiment, a handheld device comprises a launch component configured to cause a launch of a projectile. The projectile is configured to capture a plurality of images. Individual images of the plurality of images are of different segments of an area. The system also comprises an image stitch component configured to stitch the plurality of images into a composite image. The composite image is of a higher resolution than a resolution of individual images of the plurality of images.
1-16. (canceled) 17. A system, comprising: a processor; and a non-transitory computer-readable medium configured to store computer-executable instructions that when executed by the processor cause the processor to perform a method, the method comprising: comparing a combination image, which is a real-time image of a vicinity, against a non-real-time image of the vicinity to produce a comparison result, where the combination image results from stitching a set of sub-combination images together; identifying a position of the vicinity through use of the comparison result; and causing an information set that is indicative of the position to be disclosed on an interface. 18. The system of claim 17, where the information set that is indicative of the position is displayed concurrently with the combination image on the interface. 19. The system of claim 17, where the set of sub-combination images are captured by a launched projectile. 20. The system of claim 17, where the information set is a command message set and where an individual command of the command message set causes a battle command action to occur, when selected, at a locality that is indicated by the location of the combination image. 21. The system of claim 17, where the interface discloses the combination image concurrently with the non-real-time image and where the combination image is displayed on top of the non-real-time image. 22. The system of claim 17, where the interface discloses the combination image concurrently with the non-real-time image and where the non-real-time image is displayed on top of the combination image. 23. 
The system of claim 17, the method comprising: identifying a common feature set between the combination image and the non-real-time image; aligning the combination image and the non-real-time image through use of the common feature set; and causing the combination image and the non-real-time image to be concurrently disclosed on the interface in an aligned manner, where the information set is disclosed in an aligned manner with the combination image and the non-real-time image. 24. The system of claim 18, where causing the information set that is indicative of the position to be disclosed on the interface occurs in response to a user request. 25. The system of claim 24, where the information set that is indicative of the position is displayed at a user-designated location on the interface. 26. The system of claim 19, where the interface is part of a handheld device and where the launched projectile is launched from the handheld device. 27. The system of claim 26, the method comprising: accessing the non-real-time image from the non-transitory computer-readable medium; where the non-transitory computer-readable medium is resident upon the handheld device and where the processor is resident upon the handheld device. 28. The system of claim 27, the method comprising: modifying the non-real-time image as retained in the non-transitory computer-readable medium in accordance with the combination image, where when a subsequent comparing occurs the modified non-real-time image is used as a subsequent non-real-time image. 29. The system of claim 27, the method comprising: replacing the non-real-time image from the non-transitory computer-readable medium with the combination image, where when a subsequent comparing occurs the combination image is used as a subsequent non-real-time image. 30. 
The system of claim 27, where the handheld device is coupled to the projectile by way of a singular physical link, where the singular physical link is used to transmit an image set from the projectile to the handheld device, and where the singular physical link tethers the projectile to the handheld device. 31. The system of claim 30, where the image set is the combination image and where stitching the set of sub-combination images together occurs at the projectile. 32. The system of claim 30, where the image set is the set of sub-combination images and where stitching the set of sub-combination images together occurs at the handheld device. 33. The system of claim 30, where the singular physical link is broken after the image set is transmitted from the projectile to the handheld device. 34. The system of claim 26, where the launched projectile captures the set of sub-combination images in response to identification of a travel condition of the launched projectile after launch from the handheld device. 35. A method, comprising: comparing, by way of a handheld device, a combination image, which is a real-time image of a vicinity, against a non-real-time image of the vicinity to produce a comparison result, where the combination image results from stitching a set of sub-combination images together; identifying, by way of the handheld device, a position of the vicinity through use of the comparison result; and causing, by way of the handheld device, an information set that is indicative of the position to be disclosed on an interface of the handheld device. 36. 
A handheld device that is at least partially hardware, comprising: an interface; a comparison component configured to compare a combination image, which is a real-time image of a vicinity, against a non-real-time image of the vicinity to produce a comparison result, where the combination image results from stitching a set of sub-combination images together; an identification component configured to identify a position of the vicinity through use of the comparison result; and an interface component configured to cause an information set that is indicative of the position to be disclosed on the interface.
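The claims above describe stitching captured sub-images into a combination image and then comparing it against a stored non-real-time image to identify a position. A minimal sketch of that flow, using toy 2D integer "images" and a sum-of-squared-differences match in place of a real feature-based comparison (the function names and matcher are illustrative assumptions, not taken from the patent text):

```python
# Hypothetical sketch: stitch sub-images side by side, then locate the
# stitched patch inside a stored reference "map" to identify a position.

def stitch(sub_images):
    """Stitch same-height sub-images left to right into one combination image."""
    stitched = []
    for row_parts in zip(*sub_images):
        row = []
        for part in row_parts:
            row.extend(part)
        stitched.append(row)
    return stitched

def locate(reference, patch):
    """Slide the patch over the reference and return the (row, col) offset
    with the smallest sum of squared differences (the 'comparison result')."""
    ph, pw = len(patch), len(patch[0])
    best, best_pos = None, None
    for r in range(len(reference) - ph + 1):
        for c in range(len(reference[0]) - pw + 1):
            ssd = sum(
                (reference[r + i][c + j] - patch[i][j]) ** 2
                for i in range(ph) for j in range(pw)
            )
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# A reference "non-real-time image" and two captured segments of it.
reference = [[r * 10 + c for c in range(6)] for r in range(4)]
left = [[11, 12], [21, 22]]    # rows 1-2, cols 1-2 of the reference
right = [[13, 14], [23, 24]]   # rows 1-2, cols 3-4 of the reference
combination = stitch([left, right])
print(locate(reference, combination))  # → (1, 1)
```

A production system would of course use feature detection and image registration rather than exhaustive SSD search, but the claim structure (stitch, compare, identify position) maps onto these three steps.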
2,600
10,288
10,288
13,965,896
2,625
An optical navigation system and an optical navigation apparatus thereof are provided. The optical navigation system comprises a host and the optical navigation apparatus. The host detects a running program to generate a control signal. The optical navigation apparatus, which is connected to the host in a wireless way, receives the control signal and performs a performance configuration according to the control signal to adjust an image data amount that is required for processing.
1. An optical navigation system, comprising: a host, being configured to detect a running program to generate a control signal; and an optical navigation apparatus, connected to the host in a wireless way, being configured to receive the control signal and perform a performance configuration according to the control signal to adjust an image data amount of an image data required to be processed, wherein the image data is generated by the optical navigation apparatus based on images captured by the optical navigation apparatus when the optical navigation apparatus relatively moves across a working surface. 2. The optical navigation system as claimed in claim 1, wherein the optical navigation apparatus comprises an image capture module configured to capture the images, and the performance configuration is performed to set a frame rate for an image capture of the image capture module. 3. The optical navigation system as claimed in claim 2, wherein the control signal is a high performance signal and the optical navigation apparatus sets the frame rate to a high performance value according to the high performance signal to increase the image data amount. 4. The optical navigation system as claimed in claim 2, wherein the control signal is a low performance signal and the optical navigation apparatus sets the frame rate to a power saving value according to the low performance signal to decrease the image data amount. 5. The optical navigation system as claimed in claim 2, wherein the control signal is a normal performance signal and the optical navigation apparatus sets the frame rate to a default value according to the normal performance signal. 6. 
The optical navigation system as claimed in claim 1, wherein the optical navigation apparatus comprises an image capture module and a processor, the image capture module is configured to capture the images and is electrically connected to the processor, and the performance configuration is performed to set a precision of an image processing of the processor. 7. The optical navigation system as claimed in claim 6, wherein the control signal is a high performance signal and the optical navigation apparatus sets the precision to a high performance value according to the high performance signal to increase the image data amount. 8. The optical navigation system as claimed in claim 6, wherein the control signal is a low performance signal and the optical navigation apparatus sets the precision to a power saving value according to the low performance signal to decrease the image data amount. 9. The optical navigation system as claimed in claim 6, wherein the control signal is a normal performance signal and the optical navigation apparatus sets the precision to a default value according to the normal performance signal. 10. The optical navigation system of claim 1, wherein the optical navigation apparatus further receives an output voltage from a battery, detects the output voltage, and performs the performance configuration when the output voltage is lower than a threshold value to decrease the image data amount. 11. The optical navigation system as claimed in claim 10, wherein the optical navigation apparatus further disables at least one function key of the optical navigation apparatus when the output voltage is lower than the threshold value. 12. The optical navigation system as claimed in claim 1, wherein the optical navigation apparatus comprises a control key, and performs the performance configuration to adjust the image data amount according to a trigger signal generated by the control key. 13. 
An optical navigation apparatus, comprising: a transceiver, being configured to receive a control signal from a host in a wireless way, the control signal being generated by the host detecting a running program; an image capture module, being configured to capture images to generate an image data when the optical navigation apparatus relatively moves across a working surface; and a processor electrically connected to the transceiver and the image capture module, being configured to perform a performance configuration according to the control signal to adjust an image data amount of the image data required to be processed. 14. The optical navigation apparatus as claimed in claim 13, wherein the performance configuration is performed to set a frame rate for an image capture of the image capture module. 15. The optical navigation apparatus as claimed in claim 14, wherein the control signal is a high performance signal and the processor sets the frame rate to a high performance value according to the high performance signal to increase the image data amount. 16. The optical navigation apparatus as claimed in claim 14, wherein the control signal is a low performance signal and the processor sets the frame rate to a power saving value according to the low performance signal to decrease the image data amount. 17. The optical navigation apparatus as claimed in claim 14, wherein the control signal is a normal performance signal and the optical navigation apparatus sets the frame rate to a default value according to the normal performance signal. 18. The optical navigation apparatus as claimed in claim 13, wherein the performance configuration is performed to set a precision of an image processing of the processor. 19. The optical navigation apparatus as claimed in claim 18, wherein the control signal is a high performance signal and the processor sets the precision to a high performance value according to the high performance signal to increase the image data amount. 20. 
The optical navigation apparatus as claimed in claim 18, wherein the control signal is a low performance signal and the processor sets the precision to a power saving value according to the low performance signal to decrease the image data amount. 21. The optical navigation apparatus as claimed in claim 18, wherein the control signal is a normal performance signal and the processor sets the precision to a default value according to the normal performance signal. 22. The optical navigation apparatus as claimed in claim 13, further comprising a power supply module, wherein the power supply module connects to the processor and receives an output voltage of a battery, the power supply module detects the output voltage of the battery and generates a trigger signal when the output voltage is lower than a threshold value, and the processor further performs the performance configuration according to the trigger signal to decrease the image data amount. 23. The optical navigation apparatus as claimed in claim 22, further comprising at least one function key electrically connected to the processor, wherein the processor further disables the at least one function key according to the trigger signal. 24. The optical navigation apparatus as claimed in claim 13, further comprising a control key electrically connected to the processor, and the processor further performs the performance configuration to adjust the image data amount according to a trigger signal generated by the control key.
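The optical navigation claims describe a host that maps a detected running program to a control signal, and an apparatus that adjusts its frame rate (and thus the image data amount) accordingly, with a low-battery override. A small sketch of that control flow; the signal names, frame-rate constants, and voltage threshold are invented for illustration and do not appear in the claims:

```python
# Illustrative performance-configuration sketch (assumed constants).
DEFAULT_FPS, HIGH_FPS, SAVING_FPS = 1000, 2000, 500

class OpticalNavigationApparatus:
    def __init__(self):
        self.frame_rate = DEFAULT_FPS

    def apply_control_signal(self, signal):
        """Perform the performance configuration for a host control signal."""
        if signal == "high":      # high performance signal: more image data
            self.frame_rate = HIGH_FPS
        elif signal == "low":     # low performance signal: power saving
            self.frame_rate = SAVING_FPS
        elif signal == "normal":  # normal performance signal: default value
            self.frame_rate = DEFAULT_FPS

    def check_battery(self, output_voltage, threshold=3.3):
        """Fall back to the power-saving rate when the output voltage is low."""
        if output_voltage < threshold:
            self.frame_rate = SAVING_FPS

class Host:
    """The host detects the running program and generates a control signal."""
    SIGNAL_BY_PROGRAM = {"fps_game": "high", "text_editor": "low"}

    def control_signal_for(self, program):
        return self.SIGNAL_BY_PROGRAM.get(program, "normal")

mouse, host = OpticalNavigationApparatus(), Host()
mouse.apply_control_signal(host.control_signal_for("fps_game"))
print(mouse.frame_rate)   # → 2000
mouse.check_battery(3.0)  # low battery overrides to the power-saving value
print(mouse.frame_rate)   # → 500
```

The same dispatch could drive the image-processing precision instead of the frame rate; the claims treat both as instances of one "performance configuration".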
2,600
10,289
10,289
15,419,018
2,669
Systems and methods are directed to computer-assisted navigation of images of a cytological specimen, including the acts of analyzing a first image of the cytological specimen to identify a plurality of objects of interest within the cytological specimen, displaying a plurality of images each comprising one of the plurality of identified objects of interest within the cytological specimen and including a second image of at least one object of interest, and displaying, responsive to receiving a user selection of the second image of the at least one object of interest, a field of view of the at least one object of interest and neighboring objects of interest, wherein the second image of the at least one object of interest has a first magnification, and wherein the field of view of the at least one object of interest is displayed at a second magnification different than the first magnification.
1. A computer-assisted method of navigating images of a cytological specimen, comprising the acts of: analyzing a first image of the cytological specimen to identify a plurality of objects of interest within the cytological specimen; displaying a plurality of images each comprising one of the plurality of identified objects of interest within the cytological specimen and including a second image of at least one object of interest; and displaying, responsive to receiving a user selection of the second image of the at least one object of interest, a field of view of the at least one object of interest and neighboring objects of interest, wherein the second image of the at least one object of interest has a first magnification, and wherein the field of view of the at least one object of interest is displayed at a second magnification different than the first magnification. 2. The method of claim 1, wherein the second image of the at least one object of interest is displayed in a scroll bar. 3. The method of claim 1, further comprising acts of: receiving, from a user, a classification of the at least one object of interest; and storing the classification of the at least one object of interest with the second image of the at least one object of interest. 4. The method of claim 1, wherein the field of view is accessed from a database of cytological specimen images. 5. The method of claim 1, wherein the plurality of images is sorted by a probability of having one or more predetermined characteristics. 6. The method of claim 5, wherein the one or more predetermined characteristics include one or more of morphological characteristics, a stain type, a cell size, a nucleus-to-cytoplasm ratio, an optical density, a regularity of contour, color-based criteria, and a nucleic density. 7. The method of claim 6, wherein the stain type is at least one of chromogenic and fluorescent. 8. 
The method of claim 5, wherein the one or more predetermined characteristics are selected by a user. 9. The method of claim 1, further comprising receiving a selection of a magnification value for the second magnification. 10. The method of claim 1, wherein the plurality of objects includes at least one of a plurality of cells and a plurality of cell clusters. 11. A system for inspecting images of a cytological specimen, the system comprising: at least one processor operatively coupled to a memory; a user interface display; and a user interface component, executed by the at least one processor, configured to: analyze a first image of the cytological specimen to identify a plurality of objects of interest within the cytological specimen; display a plurality of images each comprising one of the plurality of identified objects of interest within the cytological specimen and including a second image of at least one object of interest; and display, responsive to receiving a user selection of the second image of the at least one object of interest, a field of view of the at least one object of interest and neighboring objects of interest, wherein the second image of the at least one object of interest has a first magnification, and wherein the field of view of the at least one object of interest has a second magnification different than the first magnification. 12. The system of claim 11, wherein the user interface component is further configured to display the second image of at least one object of interest in a scroll bar. 13. The system of claim 11, wherein the user interface component is further configured to: receive a classification of the at least one object of interest; and store the classification of the at least one object of interest with the second image of the at least one object of interest. 14. The system of claim 11, wherein the field of view can be accessed from a database of cytological specimen images. 15. 
The system of claim 11, wherein the user interface component is configured to sort the plurality of images by a probability of having one or more predetermined characteristics. 16. The system of claim 15, wherein the predetermined characteristics include one or more of morphological characteristics, a stain type, a cell size, a nucleus-to-cytoplasm ratio, an optical density, a regularity of contour, color-based criteria, and a nucleic density. 17. The system of claim 16, wherein the stain type is at least one of chromogenic and fluorescent. 18. The system of claim 15, wherein the user interface component is configured to receive the one or more predetermined characteristics from a user. 19. The system of claim 11, wherein the user interface component is configured to receive a magnification value of the second magnification from a user. 20. The system of claim 11, wherein the plurality of objects includes at least one of a plurality of cells and a plurality of cell clusters.
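The cytology claims describe a gallery of object-of-interest thumbnails sorted by the probability of having predetermined characteristics, where selecting a thumbnail opens a field of view around that object at a different magnification. A toy sketch of that interaction; the record fields, magnification values, and region size are assumptions:

```python
# Hedged sketch of the claimed gallery/field-of-view navigation.
THUMB_MAG, FOV_MAG = 40.0, 10.0   # first vs. second magnification (assumed)

objects_of_interest = [
    {"id": "c1", "center": (120, 80), "probability": 0.42},
    {"id": "c2", "center": (300, 210), "probability": 0.91},
    {"id": "c3", "center": (55, 400), "probability": 0.67},
]

def gallery(objects):
    """Sort thumbnails so the most probable objects of interest come first."""
    return sorted(objects, key=lambda o: o["probability"], reverse=True)

def field_of_view(obj, half_size=100):
    """On selection, return a wider region (second magnification) centered on
    the chosen object so that neighboring objects are also visible."""
    x, y = obj["center"]
    return {
        "magnification": FOV_MAG,
        "region": (x - half_size, y - half_size, x + half_size, y + half_size),
    }

ranked = gallery(objects_of_interest)
print([o["id"] for o in ranked])  # → ['c2', 'c3', 'c1']
print(field_of_view(ranked[0]))   # wider, lower-magnification view around c2
```

In the claimed system the probabilities would come from analyzing the first image for characteristics such as nucleus-to-cytoplasm ratio or optical density; here they are hard-coded to keep the sketch self-contained.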
2,600
10,290
10,290
14,401,438
2,647
A method, an apparatus and a corresponding computer program product are proposed. The method according to one embodiment of the invention determines at least one cell in a network sharing context of at least one user equipment with at least one other cell within the network. It then creates at least one measurement configuration to be used by the at least one user equipment, wherein the at least one measurement configuration indicates at least to the at least one user equipment that it can decide whether to perform a cell change without transmitting back its measurement report to its serving cell. The method further transmits the created at least one measurement configuration to the at least one user equipment.
1-35. (canceled) 36. An apparatus, comprising: at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least: determine at least one cell in a network which shares context of at least one user equipment with at least one other cell within the network; create at least one measurement configuration being used by the at least one user equipment, wherein the at least one measurement configuration indicates at least to the at least one user equipment that it can decide whether to perform a cell change without transmitting back a measurement report to its serving cell; and transmit the created at least one measurement configuration to the at least one user equipment. 37. The apparatus according to claim 36, wherein the created at least one measurement configuration indicates to the at least one user equipment such enabled self decision on cell change by a specific pre-defined identifier and/or by any different configurations compared with measurement configurations of a user equipment assisted and network controlled handover procedure. 38. The apparatus according to claim 36, wherein the apparatus is configured to perform measurement at any appropriate frequencies comprising at least one of inter-frequency, intra-frequency and inter radio access technologies. 39. The apparatus according to claim 36, wherein the created measurement configuration further comprises information indicating at least one of following: at least one neighbor cell of said at least one cell which shares the context with at least one other cell; at least one frequency to be measured; at least one measurement event; and at least one event related parameter including threshold value and/or measurement gap. 40. 
The apparatus according to claim 36, wherein the network is a heterogeneous network in which at least one small cell with a relatively smaller coverage is located adjacent to or within at least one of said at least one cell, and the created measurement configuration is to be used by the at least one user equipment when moving between said at least one cell and said at least one small cell, moving between the small cells, or moving between the cells. 41. The apparatus according to claim 40, wherein if said at least one small cell is located within said at least one cell, an access node of said at least one cell provides access service to an access node of said at least one small cell. 42. A method, comprising: maintaining at a user equipment at least one measurement configuration created at network side, wherein at least one cell in the network shares context of the user equipment with at least one other cell within the network, and the at least one measurement configuration indicates to the user equipment that it can decide whether to perform cell change without transmitting back a measurement report to its serving cell; performing at least one measurement based on the maintained at least one measurement configuration to check if there is any other cell within the network providing better signal quality than its current serving cell; and performing cell change directly without transmitting a measurement report back to the current serving cell, if the target cell provides better signal quality than the current serving cell. 43. The method according to claim 42, wherein the maintained at least one measurement configuration indicates to the user equipment such enabled self decision on cell change by a specific identifier and/or by any different configurations compared with measurement configurations of a user equipment assisted and network controlled handover procedure. 44. 
The method according to claim 42, wherein the maintained at least one measurement configuration further comprises information indicating at least one of following: at least one neighbor cell relative to the current serving cell of said at least one cell which shares the context with at least one other cell; at least one frequency to be measured; at least one measurement event; and at least one event related parameter including threshold value and/or measurement gap. 45. The method according to claim 42, wherein the user equipment further maintains at least one normal measurement configuration of a user equipment assisted and network controlled handover procedure, and the method further comprising steps of: determining, at least before performing cell change, whether an obtained measurement is performed according to the normal measurement configuration; and if it is performed according to the normal measurement configuration, transmitting the obtained measurement report to its current serving cell instead of performing cell change directly. 46. The method according to claim 45, wherein the type of the maintained measurement configurations is determined based on a specific identifier and/or any different configurations between the maintained measurement configuration created at the heterogeneous network and the normal measurement configuration of the user equipment assisted and network controlled handover procedure. 47. The method according to claim 42, wherein the network is a heterogeneous network in which at least one small cell with a relatively smaller coverage is located adjacent to or within at least one of said at least one cell, and the at least one measurement can be performed by the user equipment on a received signal when the user equipment is moving between said at least one cell and said at least one small cell, moving between the small cells, or moving between the cells. 48. 
The method according to claim 42, wherein the at least one measurement can be performed at any appropriate frequencies comprising at least one of inter-frequency, intra-frequency and inter radio access technologies; and/or if said at least one small cell is located within said at least one cell, an access node of said at least one cell provides access service to an access node of said at least one small cell. 49. An apparatus, comprising: at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least: maintain at least one measurement configuration created at network side, wherein at least one cell in the network shares context of the apparatus with at least one other cell within the network, and the at least one measurement configuration indicates to the apparatus that it can decide whether to perform a cell change without transmitting back a measurement report to its serving cell; perform at least one measurement based on the maintained at least one measurement configuration to check if there is any other cell within the network providing better signal quality than its current serving cell; and perform cell change directly without transmitting a measurement report back to its current serving cell, if the target cell provides better signal quality than the current serving cell. 50. The apparatus according to claim 49, wherein the maintained at least one measurement configuration indicates to the apparatus such enabled self decision on cell change by a specific identifier and/or by any different configurations compared with measurement configurations of a user equipment assisted and network controlled handover procedure. 51. 
The apparatus according to claim 49, wherein the maintained at least one measurement configuration further comprises information indicating at least one of following: at least one neighbor cell relative to the current serving cell of said at least one cell which shares the context with at least one other cell; at least one frequency to be measured; at least one measurement event; and at least one event related parameter including threshold value and/or measurement gap. 52. The apparatus according to claim 49, wherein the apparatus further maintains at least one normal measurement configuration of a user equipment assisted and network controlled handover procedure, and the apparatus is further configured to: determine, at least before performing cell change, whether an obtained measurement is performed according to the normal measurement configuration; and if it is performed according to the normal measurement configuration, transmit the obtained measurement report to its current serving cell instead of performing a cell change directly. 53. The apparatus according to claim 52, wherein the type of the maintained measurement configurations is determined based on a specific identifier and/or any different configurations between the maintained measurement configuration created at the heterogeneous network and the normal measurement configuration of the user equipment assisted and network controlled handover procedure. 54. The apparatus according to claim 49, wherein the network is a heterogeneous network in which at least one small cell with a relatively smaller coverage is located adjacent to or within at least one of said at least one cell, and the at least one measurement can be performed by the apparatus on a received signal when the apparatus is moving between said at least one cell and said at least one small cell, moving between the small cells, or moving between the cells. 55. 
The apparatus according to claim 49, wherein the maintained at least one measurement configuration is either pre-loaded into the apparatus or received from the network after the apparatus is powered on; and/or if a condition of a measurement event defined in the maintained at least one measurement configuration is met.
A method, an apparatus and a corresponding computer program product are proposed. The method according to one embodiment of the invention determines at least one cell in a network sharing context of at least one user equipment with at least one other cell within the network. It then creates at least one measurement configuration to be used by the at least one user equipment, wherein the at least one measurement configuration indicates at least to the at least one user equipment that it can decide whether to perform a cell change without transmitting back its measurement report to its serving cell. The method further transmits the created at least one measurement configuration to the at least one user equipment.1-35. (canceled) 36. An apparatus, comprising: at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least: determine at least one cell in a network which shares context of at least one user equipment with at least one other cell within the network; create at least one measurement configuration being used by the at least one user equipment, wherein the at least one measurement configuration indicates at least to the at least one user equipment that it can decide whether to perform a cell change without transmitting back a measurement report to its serving cell; and transmit the created at least one measurement configuration to the at least one user equipment. 37. The apparatus according to claim 36, wherein the created at least one measurement configuration indicates to the at least one user equipment such enabled self decision on cell change by a specific pre-defined identifier and/or by any different configurations compared with measurement configurations of a user equipment assisted and network controlled handover procedure. 38. 
The apparatus according to claim 36, wherein the apparatus is configured to perform measurement at any appropriate frequencies comprising at least one of inter-frequency, intra-frequency and inter radio access technologies. 39. The apparatus according to claim 36, wherein the created measurement configuration further comprises information indicating at least one of following: at least one neighbor cell of said at least one cell which shares the context with at least one other cell; at least one frequency to be measured; at least one measurement event; and at least one event related parameter including threshold value and/or measurement gap. 40. The apparatus according to claim 36, wherein the network is a heterogeneous network in which at least one small cell with a relatively smaller coverage is located adjacent to or within at least one of said at least one cell, and the created measurement configuration is to be used by the at least one user equipment when moving between said at least one cell and said at least one small cell, moving between the small cells, or moving between the cells. 41. The apparatus according to claim 40, wherein if said at least one small cell is located within said at least one cell, an access node of said at least one cell provides access service to an access node of said at least one small cell. 42. 
A method, comprising: maintaining at a user equipment at least one measurement configuration created at network side, wherein at least one cell in the network shares context of the user equipment with at least one other cell within the network, and the at least one measurement configuration indicates to the user equipment that it can decide whether to perform cell change without transmitting back a measurement report to its serving cell; performing at least one measurement based on the maintained at least one measurement configuration to check if there is any other cell within the network providing better signal quality than its current serving cell; and performing cell change directly without transmitting a measurement report back to the current serving cell, if the target cell provides better signal quality than the current serving cell. 43. The method according to claim 42, wherein the maintained at least one measurement configuration indicates to the user equipment such enabled self decision on cell change by a specific identifier and/or by any different configurations compared with measurement configurations of a user equipment assisted and network controlled handover procedure. 44. The method according to claim 42, wherein the maintained at least one measurement configuration further comprises information indicating at least one of following: at least one neighbor cell relative to the current serving cell of said at least one cell which shares the context with at least one other cell; at least one frequency to be measured; at least one measurement event; and at least one event related parameter including threshold value and/or measurement gap. 45. 
The method according to claim 42, wherein the user equipment further maintains at least one normal measurement configuration of a user equipment assisted and network controlled handover procedure, and the method further comprising steps of: determining, at least before performing cell change, whether an obtained measurement is performed according to the normal measurement configuration; and if it is performed according to the normal measurement configuration, transmitting the obtained measurement report to its current serving cell instead of performing cell change directly. 46. The method according to claim 45, wherein the type of the maintained measurement configurations is determined based on a specific identifier and/or any different configurations between the maintained measurement configuration created at the heterogeneous network and the normal measurement configuration of the user equipment assisted and network controlled handover procedure. 47. The method according to claim 42, wherein the network is a heterogeneous network in which at least one small cell with a relatively smaller coverage is located adjacent to or within at least one of said at least one cell, and the at least one measurement can be performed by the user equipment on a received signal when the user equipment is moving between said at least one cell and said at least one small cell, moving between the small cells, or moving between the cells. 48. The method according to claim 42, wherein the at least one measurement can be performed at any appropriate frequencies comprising at least one of inter-frequency, intra-frequency and inter radio access technologies; and/or if said at least one small cell is located within said at least one cell, an access node of said at least one cell provides access service to an access node of said at least one small cell. 49. 
An apparatus, comprising: at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least: maintain at least one measurement configuration created at network side, wherein at least one cell in the network shares context of the apparatus with at least one other cell within the network, and the at least one measurement configuration indicates to the apparatus that it can decide whether to perform a cell change without transmitting back a measurement report to its serving cell; perform at least one measurement based on the maintained at least one measurement configuration to check if there is any other cell within the network providing better signal quality than its current serving cell; and perform cell change directly without transmitting a measurement report back to its current serving cell, if the target cell provides better signal quality than the current serving cell. 50. The apparatus according to claim 49, wherein the maintained at least one measurement configuration indicates to the apparatus such enabled self decision on cell change by a specific identifier and/or by any different configurations compared with measurement configurations of a user equipment assisted and network controlled handover procedure. 51. The apparatus according to claim 49, wherein the maintained at least one measurement configuration further comprises information indicating at least one of following: at least one neighbor cell relative to the current serving cell of said at least one cell which shares the context with at least one other cell; at least one frequency to be measured; at least one measurement event; and at least one event related parameter including threshold value and/or measurement gap. 52. 
The apparatus according to claim 49, wherein the apparatus further maintains at least one normal measurement configuration of a user equipment assisted and network controlled handover procedure, and the apparatus is further configured to: determine, at least before performing cell change, whether an obtained measurement is performed according to the normal measurement configuration; and if it is performed according to the normal measurement configuration, transmit the obtained measurement report to its current serving cell instead of performing a cell change directly. 53. The apparatus according to claim 52, wherein the type of the maintained measurement configurations is determined based on a specific identifier and/or any different configurations between the maintained measurement configuration created at the heterogeneous network and the normal measurement configuration of the user equipment assisted and network controlled handover procedure. 54. The apparatus according to claim 49, wherein the network is a heterogeneous network in which at least one small cell with a relatively smaller coverage is located adjacent to or within at least one of said at least one cell, and the at least one measurement can be performed by the apparatus on a received signal when the apparatus is moving between said at least one cell and said at least one small cell, moving between the small cells, or moving between the cells. 55. The apparatus according to claim 49, wherein the maintained at least one measurement configuration is either pre-loaded into the apparatus or received from the network after the apparatus is powered on; and/or if a condition of a measurement event defined in the maintained at least one measurement configuration is met.
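The UE-side behavior recited in method claim 42 (and the corresponding apparatus claims 49 and 45) can be illustrated with a minimal sketch. All names, the threshold value, and the signal-quality representation below are hypothetical illustrations, not part of the claimed subject matter; the sketch only shows the control flow: maintain a network-created measurement configuration, measure candidate cells, and either change cells directly without a measurement report (self-decision enabled) or fall back to reporting under the normal UE-assisted, network-controlled procedure.

```python
# Hypothetical sketch of the logic in claims 42-45; names and values are
# illustrative assumptions, not from the patent text.
from dataclasses import dataclass, field

@dataclass
class MeasurementConfig:
    # Claim 44: neighbor cells, frequencies, and event-related parameters.
    neighbor_cells: list
    frequencies: list
    threshold_db: float = 3.0      # assumed hysteresis before a cell change
    self_decision: bool = True     # claim 43: identifier enabling self-decision

@dataclass
class UserEquipment:
    serving_cell: str
    config: MeasurementConfig
    reports_sent: list = field(default_factory=list)

    def step(self, measurements: dict) -> str:
        """measurements maps cell id -> signal quality (e.g. RSRP in dBm)."""
        serving_q = measurements[self.serving_cell]
        best = max(self.config.neighbor_cells,
                   key=lambda c: measurements.get(c, float("-inf")))
        best_q = measurements.get(best, float("-inf"))
        if not self.config.self_decision:
            # Claim 45: normal procedure -> report to the serving cell
            # instead of performing the cell change directly.
            self.reports_sent.append((self.serving_cell, measurements))
            return self.serving_cell
        if best_q > serving_q + self.config.threshold_db:
            self.serving_cell = best   # direct cell change, no report sent
        return self.serving_cell
```

With self-decision enabled, a UE served by a macro cell measuring a stronger small cell switches immediately and sends no report; with it disabled, the same measurement produces a report and no switch.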
2,600
10,291
10,291
15,309,296
2,696
A device has a camera and a screen for displaying an image captured by the camera. A user specifies desired modification to an image displayed on the screen to produce a desired modified image. This may for example make the user look more attractive. Required light output characteristics of a lighting device are then derived so that subsequent captured images using the altered lighting are closer to the desired modified image.
1. A device comprising: a camera; a screen for displaying an image captured by the camera; a user input interface; and a processor; wherein the processor is adapted to: receive a user input representing a desired modification to the image displayed on the screen showing a scene; and derive required light output characteristics of a lighting device to change the illumination to the scene such that subsequent captured images reflect the desired modification. 2. A device as claimed in claim 1, wherein the processor is further configured to: use the user input to produce a desired modified image; and derive the required light output characteristics of a lighting device to change the illumination to the scene based on difference between the captured image and the desired modified image such that subsequent captured images are closer to the desired modified image. 3. A device as claimed in claim 1, wherein the modification relates to image properties such as hue, saturation and luminance. 4. A device as claimed in claim 1, wherein the processor is adapted to receive a user input identifying a selected one of a set of modified images. 5. A device as claimed in claim 4, wherein the processor is adapted to derive a difference between a metric representing the selected modified image and a metric representing the original image, and derive the required light output from the difference. 6. A device as claimed in claim 1, comprising a portable device, such as a mobile phone or tablet, wherein the lighting device comprises a camera flash of the portable device. 7. A device as claimed in claim 1, wherein the subsequent captured images are static or form a video sequence. 8. 
A device as claimed in claim 7, wherein the captured image and the subsequent captured images include the face of a user and the processor is further adapted to: perform a face recognition function; derive a metric representing the pixel properties for the face area of the image and for the face area of the desired modified image; and derive the light output characteristics from a difference between the metrics. 9. A method of controlling lighting using a device which has a camera and a screen, the method comprising: capturing an image of a scene using the camera of the device; displaying the image on the screen of the device; receiving a user input representing a desired modification to the displayed image to produce a desired modified image; deriving required light output characteristics of a lighting device to change the illumination to the scene such that subsequent captured images are closer to the desired modified image; and controlling the lighting device to output the required light output characteristics. 10. A method as claimed in claim 9, wherein displaying the image on the screen comprises displaying a set of modified images, and receiving the user input comprises receiving selection of one of the set of modified images. 11. A method as claimed in claim 9, wherein deriving a required light output comprises deriving a difference between a metric representing the selected modified image and a metric representing the original image, and deriving the required light output from the difference. 12. A method as claimed in claim 11, comprising: performing a face recognition function; deriving a metric representing the pixel properties for the face area of the image and for the face area of the desired modified image; and deriving the light output characteristics from a difference between the metrics. 13. 
A method as claimed in claim 9, wherein the device comprises a portable device such as a mobile phone or a tablet, and controlling the lighting device comprises controlling a camera flash of the portable device. 14. Computer program product downloadable from a communication network and/or stored on a computer-readable and/or microprocessor-executable medium, characterized in that it comprises program code instructions for implementing a method for controlling lighting using a device which has a camera and a screen according to claim 9 when said program is run on a computer. 15. A medium for storing and comprising the computer program product as defined in claim 14.
A device has a camera and a screen for displaying an image captured by the camera. A user specifies desired modification to an image displayed on the screen to produce a desired modified image. This may for example make the user look more attractive. Required light output characteristics of a lighting device are then derived so that subsequent captured images using the altered lighting are closer to the desired modified image.1. A device comprising: a camera; a screen for displaying an image captured by the camera; a user input interface; and a processor; wherein the processor is adapted to: receive a user input representing a desired modification to the image displayed on the screen showing a scene; and derive required light output characteristics of a lighting device to change the illumination to the scene such that subsequent captured images reflect the desired modification. 2. A device as claimed in claim 1, wherein the processor is further configured to: use the user input to produce a desired modified image; and derive the required light output characteristics of a lighting device to change the illumination to the scene based on difference between the captured image and the desired modified image such that subsequent captured images are closer to the desired modified image. 3. A device as claimed in claim 1, wherein the modification relates to image properties such as hue, saturation and luminance. 4. A device as claimed in claim 1, wherein the processor is adapted to receive a user input identifying a selected one of a set of modified images. 5. A device as claimed in claim 4, wherein the processor is adapted to derive a difference between a metric representing the selected modified image and a metric representing the original image, and derive the required light output from the difference. 6. 
A device as claimed in claim 1, comprising a portable device, such as a mobile phone or tablet, wherein the lighting device comprises a camera flash of the portable device. 7. A device as claimed in claim 1, wherein the subsequent captured images are static or form a video sequence. 8. A device as claimed in claim 7, wherein the captured image and the subsequent captured images include the face of a user and the processor is further adapted to: perform a face recognition function; derive a metric representing the pixel properties for the face area of the image and for the face area of the desired modified image; and derive the light output characteristics from a difference between the metrics. 9. A method of controlling lighting using a device which has a camera and a screen, the method comprising: capturing an image of a scene using the camera of the device; displaying the image on the screen of the device; receiving a user input representing a desired modification to the displayed image to produce a desired modified image; deriving required light output characteristics of a lighting device to change the illumination to the scene such that subsequent captured images are closer to the desired modified image; and controlling the lighting device to output the required light output characteristics. 10. A method as claimed in claim 9, wherein displaying the image on the screen comprises displaying a set of modified images, and receiving the user input comprises receiving selection of one of the set of modified images. 11. A method as claimed in claim 9, wherein deriving a required light output comprises deriving a difference between a metric representing the selected modified image and a metric representing the original image, and deriving the required light output from the difference. 12. 
A method as claimed in claim 11, comprising: performing a face recognition function; deriving a metric representing the pixel properties for the face area of the image and for the face area of the desired modified image; and deriving the light output characteristics from a difference between the metrics. 13. A method as claimed in claim 9, wherein the device comprises a portable device such as a mobile phone or a tablet, and controlling the lighting device comprises controlling a camera flash of the portable device. 14. Computer program product downloadable from a communication network and/or stored on a computer-readable and/or microprocessor-executable medium, characterized in that it comprises program code instructions for implementing a method for controlling lighting using a device which has a camera and a screen according to claim 9 when said program is run on a computer. 15. A medium for storing and comprising the computer program product as defined in claim 14.
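Claims 5 and 11 derive the required light output from the difference between a metric of the selected modified image and the same metric of the original image. A minimal sketch of that derivation is below; the per-channel mean is only an assumed stand-in for the claimed metric, and the image representation (nested lists of RGB tuples) is likewise illustrative.

```python
# Hypothetical sketch of the metric-difference step in claims 5 and 11.
# The per-channel mean is an assumed example metric, not the patented one.

def image_metric(image):
    """Mean R, G, B over all pixels of an image given as rows of (R, G, B)."""
    pixels = [px for row in image for px in row]
    n = len(pixels)
    return tuple(sum(px[c] for px in pixels) / n for c in range(3))

def required_light_output(original, desired):
    """Per-channel adjustment the lighting device should contribute so that
    subsequent captured images move closer to the desired modified image."""
    orig_m = image_metric(original)
    want_m = image_metric(desired)
    return tuple(w - o for w, o in zip(want_m, orig_m))
```

For a face-area variant (claims 8 and 12), the same difference would simply be computed over the pixels inside a detected face region instead of the whole frame.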
2,600
10,292
10,292
14,488,336
2,696
Provided is an information processing apparatus including: a wireless communication unit that performs communication between the information processing apparatus and an imaging apparatus using short-range wireless communication; and a control unit that determines whether or not the imaging apparatus is mounted, based on a result of the communication with the imaging apparatus that uses the short-range wireless communication.
1. An information processing apparatus comprising: a wireless communication unit that performs communication between the information processing apparatus and an imaging apparatus using short-range wireless communication; and a control unit that determines whether or not the imaging apparatus is mounted, based on a result of the communication with the imaging apparatus that uses the short-range wireless communication. 2. The information processing apparatus according to claim 1, wherein the control unit transmits a command for polling, and wherein if a response to the command is received, the control unit determines that the imaging apparatus is mounted, and if the response to the command is not received, the control unit determines that the imaging apparatus is not mounted. 3. The information processing apparatus according to claim 2, wherein if the response to the command is received, the control unit transmits a check command for reading information relating to the imaging apparatus, and wherein only if specific information for specifying the imaging apparatus is included in the response to the check command, the control unit determines that the imaging apparatus is mounted. 4. The information processing apparatus according to claim 1, wherein the wireless communication unit performs the communication between the information processing apparatus and the imaging apparatus using near field communication (NFC) as the short-range wireless communication, and wherein only if a polling response is received as the response to the transmitted polling command, the control unit determines that the imaging apparatus is mounted, and if the response to the polling command is not received, the control unit determines that the imaging apparatus is not mounted. 5. 
The information processing apparatus according to claim 4, wherein if the polling response is received, the control unit transmits a check command, and wherein only if specific information for specifying the imaging apparatus is included in a check response that is received as a response to the check command, the control unit determines that the imaging apparatus is mounted. 6. An imaging apparatus comprising: a wireless communication unit that performs communication with the imaging apparatus and an information processing apparatus using short-range wireless communication; and a control unit that performs control relating to an imaging operation based on an operation input that is performed in the information processing apparatus that determines whether or not the imaging apparatus is mounted on the information processing apparatus based on a result of the communication between the imaging apparatus and the information processing apparatus that uses the short-range wireless communication. 7. The imaging apparatus according to claim 6, wherein if a command for polling is received from the information processing apparatus, the control unit transmits a response to the command, and wherein if the response to the command is received, the information processing apparatus determines that the imaging apparatus is mounted, and if the response to the command is not present, the information processing apparatus determines that the imaging apparatus is not mounted. 8. 
The imaging apparatus according to claim 7, wherein the control unit includes specific information for specifying the imaging apparatus in a response to a check command for reading information relating to the imaging apparatus and thus transmits the response, and wherein if the response to the command is received, the information processing apparatus transmits the check command, and only if the specific information is included in the response to the check command, the information processing apparatus determines that the imaging apparatus is mounted. 9. The imaging apparatus according to claim 6, wherein the control unit transmits a polling response as a response to a polling command that is transmitted by the information processing apparatus using NFC as the short-range wireless communication, and wherein if the polling response is received as the response to the polling command, the information processing apparatus determines that the imaging apparatus is mounted, and if the response to the polling command is not present, the information processing apparatus determines that the imaging apparatus is not mounted. 10. The imaging apparatus according to claim 9, wherein the control unit includes specific information for specifying the imaging apparatus in a check response that is a response to a check command that is transmitted by the information processing apparatus and thus transmits the check response, and wherein if the polling response is received, the information processing apparatus transmits the check command, and only if the specific information is included in the check response that is received as the response to the check command, the information processing apparatus determines that the imaging apparatus is mounted. 11. 
An information processing apparatus comprising: a wireless communication unit that performs communication between the information processing apparatus and a different information processing apparatus using short-range wireless communication; and a control unit that determines whether or not the different information processing apparatus is mounted, based on a result of the communication with the different information processing apparatus that uses the short-range wireless communication. 12. An imaging system comprising: an imaging apparatus that performs communication between the imaging apparatus and an information processing apparatus using short-range wireless communication; and an information processing apparatus that determines whether or not the imaging apparatus is mounted, based on a result of the communication with the imaging apparatus that uses the short-range wireless communication. 13. A method of controlling an information processing apparatus, comprising: performing communication between the information processing apparatus and an imaging apparatus using short-range wireless communication; and determining whether or not the imaging apparatus is mounted, based on a result of the communication with the imaging apparatus that uses the short-range wireless communication. 14. A method of controlling an imaging apparatus, comprising: performing communication between the imaging apparatus and an information processing apparatus using short-range wireless communication; and performing control relating to an imaging operation based on an operation input that is performed in the information processing apparatus that determines whether or not the imaging apparatus is mounted on the information processing apparatus, based on a result of the communication between the imaging apparatus and the information processing apparatus that uses the short-range wireless communication. 15. 
A program for causing a computer to perform: communication between the computer and an imaging apparatus using short-range wireless communication; and determination of whether or not the imaging apparatus is mounted, based on a result of the communication with the imaging apparatus that uses the short-range wireless communication. 16. A program for causing a computer to perform: communication between the computer and an information processing apparatus using short-range wireless communication; and control relating to an imaging operation based on an operation input that is performed in the information processing apparatus that determines whether or not an imaging apparatus is mounted on the information processing apparatus based on a result of the communication between the imaging apparatus and the information processing apparatus that uses the short-range wireless communication.
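The mount-detection flow in claims 2-5 (poll, then read a check response and look for identifying information) can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the `link` object, command names, and `EXPECTED_DEVICE_ID` value are all assumptions standing in for the short-range wireless transport and the "specific information for specifying the imaging apparatus."

```python
# Hypothetical sketch of the claimed mount-detection flow: the information
# processing apparatus polls the imaging apparatus over short-range wireless
# communication and treats a matching check response as proof of mounting.
# The transport object and command names are illustrative assumptions.

EXPECTED_DEVICE_ID = "IMG-APPARATUS-01"  # assumed "specific information"

def send_polling_command(link):
    """Transmit a polling command; return the response, or None on timeout."""
    return link.request("POLL", timeout_ms=100)

def send_check_command(link):
    """Transmit a check command for reading information about the apparatus."""
    return link.request("CHECK", timeout_ms=100)

def is_imaging_apparatus_mounted(link):
    polling_response = send_polling_command(link)
    if polling_response is None:
        return False  # claim 2: no polling response -> not mounted
    check_response = send_check_command(link)  # claim 3: follow up with check
    # claims 3/5: mounted only if the specific identifier is present
    return (check_response is not None
            and check_response.get("device_id") == EXPECTED_DEVICE_ID)
```

The two-stage check (polling first, identification second) matches the claim structure: a bare polling response alone is not sufficient in the dependent claims; the apparatus is considered mounted only when the check response carries the expected identifier.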
TechCenter: 2,600
Unnamed: 0: 10,293
level_0: 10,293
ApplicationNumber: 15,612,611
ArtUnit: 2,674
Provided are a method of providing a screen for manipulating execution of an application of an image forming apparatus, and an image forming apparatus using the method. The method includes an operation of displaying, on the screen, a first user interface for setting options to be applied to the execution of the application, and a second user interface including at least one virtual button for controlling the operation of the image forming apparatus, so that a user may control the image forming apparatus without using physical buttons.
1. A method of providing a screen for manipulating execution of an application of an image forming apparatus, the method comprising: generating a first image signal indicating a first user interface for setting options to be applied to the execution of the application, and a second image signal indicating a second user interface comprising at least one virtual button for controlling an operation of the image forming apparatus; and displaying, based on the first image signal and the second image signal, the first user interface and the second user interface on the screen. 2. The method of claim 1, wherein at least one of a function and an appearance of the at least one virtual button is left unchanged regardless of a change in the application whose options are set by the first user interface. 3. The method of claim 1, wherein the at least one virtual button comprises a first virtual button and a second virtual button and wherein at least one of a function and an appearance of the first virtual button is maintained identical regardless of the application whose options are set by the first user interface while at least one of a function and an appearance of the second virtual button is changed according to the application whose options are set by the first user interface. 4. The method of claim 1, wherein, if the first user interface and the second user interface overlap each other on the screen, the displaying comprises displaying the second user interface on the first user interface. 5. The method of claim 4, wherein a degree of transparency of an entire area or a partial area of the second user interface is greater than a degree of transparency of the first user interface. 6. 
The method of claim 4, wherein, if a user's manipulation with respect to the first user interface is input, the second user interface disappears from the screen, and if the user's manipulation with respect to the first user interface ends, the second user interface is displayed again on the screen. 7. The method of claim 4, wherein, if a user's manipulation with respect to the first user interface is input, the second user interface is displayed as a substitution icon on an area of the screen, and if the user's manipulation with respect to the substitution icon is input, the second user interface is displayed again on the screen. 8. The method of claim 4, wherein the second user interface can be moved to a random position on the first user interface. 9. The method of claim 1, further comprising, when a user's manipulation with respect to the second user interface occurs, displaying positions to where the second user interface can be moved on the screen; and moving the second user interface to a user-selected position from among the positions and displaying the second user interface at the user-selected position. 10. The method of claim 1, wherein, when the second user interface comprises a plurality of virtual buttons, the displaying comprises displaying the second user interface separated based on each of the plurality of the virtual buttons. 11. The method of claim 1, wherein the first user interface and the second user interface are differently displayed according to types of the application. 12. The method of claim 1, wherein the second user interface further comprises a first virtual button whose shape and function are changed according to the application whose setting options are displayed by the first user interface and wherein the first virtual button is one of a button for starting, stopping, and resetting the operation of the image forming apparatus. 13. 
A non-transitory computer-readable recording medium having recorded thereon a program for executing the method of claim 1, by using a computer. 14. An image forming apparatus that provides a screen for manipulating execution of an application, the image forming apparatus comprising: an image forming unit; an image processor for generating a first image signal indicating a first user interface for setting options to be applied to the execution of the application, and a second image signal indicating a second user interface comprising at least one virtual button for controlling an operation of the image forming apparatus; and a display for displaying, based on the first image signal and the second image signal, the first user interface and the second user interface on the screen. 15. The apparatus of claim 14, wherein at least one of a function and an appearance of the at least one virtual button is left unchanged even when the image processor changes the application whose options are set by the first user interface. 16. The apparatus of claim 14, wherein the at least one virtual button comprises a first virtual button and a second virtual button and wherein at least one of a function and an appearance of the first virtual button is maintained identical regardless of the application whose options are set by the first user interface while at least one of a function and an appearance of the second virtual button is changed according to the application whose options are set by the first user interface. 17. The image forming apparatus of claim 14, wherein, if the first user interface and the second user interface overlap each other on the screen, the display displays the second user interface on the first user interface. 18. The image forming apparatus of claim 17, wherein a degree of transparency of an entire area or a partial area of the second user interface is greater than a degree of transparency of the first user interface. 19. 
The image forming apparatus of claim 17, wherein, if a user's manipulation with respect to the first user interface is input, the second user interface disappears from the screen, and if the user's manipulation, with respect to the first user interface, ends, the second user interface is displayed again on the screen. 20. The image forming apparatus of claim 17, wherein, if a user's manipulation with respect to the first user interface is input, the second user interface is displayed as a substitution icon on an area of the screen, and if the user's manipulation with respect to the substitution icon is input, the second user interface is displayed again on the screen. 21. The image forming apparatus of claim 17, wherein the second user interface can be moved to a random position on the first user interface. 22. The image forming apparatus of claim 14, wherein, when a user's manipulation with respect to the second user interface occurs, the display displays positions to where the second user interface can be moved on the screen, moves the second user interface to a user-selected position from among the positions, and displays the second user interface at the user-selected position. 23. 
An image forming apparatus that provides a screen for manipulating execution of an application, the image forming apparatus comprising: an image forming unit; an image processor to generate a first user interface for setting options to be applied to the execution of the application and a second user interface comprising first and second virtual buttons for controlling an operation of the image forming apparatus, wherein at least one of a function and an appearance of the first virtual button is left identical regardless of the application whose options are set by the first user interface while at least one of a function and an appearance of the second virtual button is changed according to the application whose options are set by the first user interface; and a display to display the first user interface and the second user interface on the screen. 24. The image forming apparatus of claim 23, wherein both the function and the appearance of the first virtual button remain identical regardless of the application being executed by the image forming apparatus. 25. The image forming apparatus of claim 23, wherein when a new application is selected, the first user interface is configured to set the options to be applied to the execution of the newly selected application and wherein both the function and the appearance of the second virtual button are changed to correspond to the newly selected application. 26. The image forming apparatus of claim 23, wherein when a new application is selected, the first user interface is configured to set the options to be applied to the execution of the newly selected application, and wherein both the function and the appearance of the second virtual button are changed to correspond to the newly selected application while the function and the appearance of the first virtual button is left unchanged.
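The overlay behavior in claims 4-7 (second user interface drawn over the first, more transparent, collapsed to a substitution icon while the user manipulates the first interface, and restored afterwards) can be sketched as a small state machine. This is a minimal illustrative sketch, not the patented implementation; the class, state names, and transparency value are assumptions.

```python
# Hypothetical sketch of the virtual-button overlay described in claims 4-7.
# The second user interface sits above the first (claim 4), is more
# transparent (claim 5), collapses to a substitution icon while the first
# interface is being manipulated, and is restored when the icon is tapped
# (claim 7). All names and values here are illustrative assumptions.

class ButtonOverlay:
    VISIBLE = "visible"
    HIDDEN_AS_ICON = "icon"

    def __init__(self):
        self.state = self.VISIBLE
        self.z_order_above_first_ui = True  # claim 4: drawn on the first UI
        self.transparency = 0.5             # claim 5: more transparent than first UI

    def on_first_ui_manipulation_start(self):
        # claim 7: replace the overlay with a substitution icon
        self.state = self.HIDDEN_AS_ICON

    def on_substitution_icon_tap(self):
        # claim 7: display the second user interface again
        self.state = self.VISIBLE
```

Claim 6 describes the variant in which the overlay disappears entirely during manipulation and reappears when the manipulation ends; that would replace the substitution-icon transition with a simple hide/show pair driven by manipulation start and end events.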
2,600
10,294
10,294
15,644,114
2,652
An apparatus for executing a command associated with a startup condition includes a processor and a memory that stores code executable by the processor to determine a startup condition of a mobile electronic device in a dormant state. The startup condition affects an initial active state of the mobile electronic device upon transition from the dormant state to an active state. The code executable by the processor includes code to select a command associated with the startup condition. The startup condition differs from a default startup condition of the mobile electronic device upon a transition from the dormant state to a default active state. The code executable by the processor includes code to execute the command during a transition of the mobile electronic device to an active state.
1. An apparatus comprising: a processor; a memory that stores code executable by the processor to: determine a selected startup condition of a plurality of startup conditions of a mobile electronic device in a dormant state, the startup condition affecting an initial active state of the mobile electronic device upon transition from the dormant state to an active state, wherein to determine the selected startup condition comprises the processor to receive a gesture from a user of a mobile electronic device, the gesture received through a sensor of the mobile electronic device while the mobile electronic device is in a dormant state, wherein the gesture is correlated to the startup condition; select a command associated with activating the startup condition, wherein the startup condition differs from a default startup condition of the mobile electronic device upon a transition from the dormant state to a default active state; receive user login information different from the gesture, wherein the user login information directs the mobile electronic device to transition to the active state; and execute the command during a transition of the mobile electronic device to the active state to activate the startup condition, wherein executing the command is in response to the mobile electronic device transitioning to the active state and wherein executing the command during the transition to the active state brings the mobile electronic device to the selected startup condition. 2. (canceled) 3. The apparatus of claim 1, wherein the sensor of the mobile electronic device comprises one or more of a touchscreen, an accelerometer, a proximity sensor and a gyroscope. 4. The apparatus of claim 3, wherein the gesture comprises one or more of: touches to the touchscreen; and swipes across the touchscreen. 5. The apparatus of claim 3, wherein the gesture comprises one or more movements of the mobile electronic device in a particular pattern. 6. (canceled) 7. (canceled) 8. 
The apparatus of claim 1, wherein to receive a gesture comprises the processor to receive a plurality of gestures, each gesture mapped to a different command, and further comprising code executable by the processor to place the received gestures in a gesture queue in an order that the gestures are received, wherein to execute the command comprises the processor to execute at least a first command in the gesture queue. 9. The apparatus of claim 8, wherein to execute the command comprises the processor to one or more of: execute each command in the gesture queue in an order dictated by a command execution priority; and execute each command in the gesture queue in an order of last received gesture to first received gesture. 10. The apparatus of claim 1, wherein to receive a gesture comprises the processor to receive a plurality of gestures and to execute the command comprises the processor to execute a command correlated with a last received gesture while ignoring previously received gestures. 11. The apparatus of claim 1, wherein the determined startup condition comprises a scheduled state at a time that the mobile electronic device transitions to the active state, wherein to select a command associated with the startup condition comprises the processor to select a command associated with the scheduled state. 12. The apparatus of claim 11, wherein code executable by the processor to determine the startup condition further comprises the processor to receive a gesture from a user of a mobile electronic device, the gesture received through a sensor of the mobile electronic device while the mobile electronic device is in a dormant state and, wherein to select a command associated with the startup condition comprises the processor to select a command associated with a combination of the gesture and the scheduled state. 13. 
The apparatus of claim 1, wherein the determined startup condition comprises a state of lighting of surroundings of the mobile electronic device at a time that the mobile electronic device transitions to the active state, the state of lighting determined by an amount of light sensed by a light sensor of the mobile electronic device, wherein to select a command associated with the startup condition comprises the processor to select a command associated with the state of lighting. 14. The apparatus of claim 13, wherein code executable by the processor to determine the startup condition further comprises code executable by the processor to receive a gesture from a user of a mobile electronic device, the gesture received through a sensor of the mobile electronic device while the mobile electronic device is in a dormant state and, wherein to select a command associated with the startup condition comprises the processor to select a command associated with a combination of the gesture and the state of lighting. 15. The apparatus of claim 1, wherein the command comprises one or more of: sending a message to summon help; executing a camera application to operate a camera of the mobile electronic device; opening an email application; opening a text messaging application; playing a sound on the mobile electronic device; preventing display of personal information; opening a gaming application; opening a note taking application; opening a voice recording application; opening a flashlight application; opening a calculator application; opening a social media application; and opening a media display application. 16.
A method comprising: determining a selected startup condition of a plurality of startup conditions of a mobile electronic device in a dormant state, the startup condition affecting an initial active state of the mobile electronic device upon transition from the dormant state to an active state, wherein determining the selected startup condition comprises receiving a gesture from a user of a mobile electronic device, the gesture received through a sensor of the mobile electronic device while the mobile electronic device is in a dormant state, wherein the gesture is correlated to the startup condition; selecting a command associated with activating the startup condition, wherein the startup condition differs from a default startup condition of the mobile electronic device upon a transition from the dormant state to a default active state; receiving user login information different from the gesture, wherein the user login information directs the mobile electronic device to transition to the active state; and executing the command during a transition of the mobile electronic device to the active state to activate the startup condition, wherein executing the command is in response to the mobile electronic device transitioning to the active state and wherein executing the command during the transition to the active state brings the mobile electronic device to the selected startup condition. 17. (canceled) 18. The method of claim 16, wherein the determined startup condition comprises a scheduled state at a time that the mobile electronic device transitions to the active state, wherein selecting a command associated with the startup condition comprises selecting a command associated with the scheduled state. 19. 
The method of claim 16, wherein the determined startup condition comprises a state of lighting of surroundings of the mobile electronic device at a time that the mobile electronic device transitions to the active state, the state of lighting determined by an amount of light sensed by a light sensor of the mobile electronic device, wherein selecting a command associated with the startup condition comprises selecting a command associated with the state of lighting. 20. A program product comprising a computer readable storage medium that stores code executable by a processor, the executable code comprising code to: determine a selected startup condition of a plurality of startup conditions of a mobile electronic device in a dormant state, the startup condition affecting an initial active state of the mobile electronic device upon transition from the dormant state to an active state, wherein to determine the selected startup condition comprises the processor to receive a gesture from a user of a mobile electronic device, the gesture received through a sensor of the mobile electronic device while the mobile electronic device is in a dormant state, wherein the gesture is correlated to the startup condition; select a command associated with activating the startup condition, wherein the startup condition differs from a default startup condition of the mobile electronic device upon a transition from the dormant state to a default active state; receive user login information different from the gesture, wherein the user login information directs the mobile electronic device to transition to the active state; and execute the command during a transition of the mobile electronic device to the active state to activate the startup condition, wherein executing the command is in response to the mobile electronic device transitioning to the active state and wherein executing the command during the transition to the active state brings the mobile electronic device to the selected
startup condition. 21. The apparatus of claim 1, wherein the mobile electronic device is in the active state when the functionality of the mobile electronic device is active. 22. The method of claim 16, wherein the mobile electronic device is in the active state when the functionality of the mobile electronic device is active.
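The queued-gesture behavior recited in claims 1, 8, and 9 — gestures received while dormant are mapped to commands, queued in arrival order, and executed during the login-triggered transition to the active state — can be sketched as follows. The gesture names and commands are hypothetical; only the queue-then-execute-on-login flow tracks the claims.

```python
from collections import deque

# Hypothetical gesture-to-command mapping (illustrative, not from the claims).
GESTURE_COMMANDS = {
    "double_tap": "open_camera",
    "shake": "send_help_message",
    "swipe_up": "open_flashlight",
}

class MobileDevice:
    def __init__(self):
        self.state = "dormant"
        self.gesture_queue = deque()
        self.executed = []

    def receive_gesture(self, gesture):
        # Gestures are only queued while the device is dormant (cf. claim 8).
        if self.state == "dormant" and gesture in GESTURE_COMMANDS:
            self.gesture_queue.append(gesture)

    def login(self):
        # Login (distinct from the gesture) directs the transition to active;
        # queued commands run during that transition, in the order received.
        self.state = "active"
        while self.gesture_queue:
            gesture = self.gesture_queue.popleft()
            self.executed.append(GESTURE_COMMANDS[gesture])

device = MobileDevice()
device.receive_gesture("double_tap")
device.receive_gesture("shake")
device.login()
assert device.executed == ["open_camera", "send_help_message"]
```

Claim 9's alternatives (priority-ordered or last-received-first execution) would replace the FIFO `popleft` loop with a sort by priority or a `pop` from the right end of the deque.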
2,600
10,295
10,295
15,219,934
2,689
A method for identifying a user of an electronic device is presented. In the method, for each of a plurality of users, a bioelectrical impedance of the user is measured, a value based on the measurement is generated, and the value is associated with information corresponding to the user. A bioelectrical impedance of a current user of the electronic device is also measured, and a value based on this measurement is generated. The value associated with the current user is compared with at least one of the values associated with the plurality of users. In response to the comparison, the electronic device is operated based on the information corresponding to one of the plurality of users in response to the current user interacting with the electronic device if the value associated with the current user indicates the current user is the one of the plurality of users.
1. A method of identifying a user of an electronic device, the method comprising: measuring a bioelectrical impedance of the user; generating a value based on the measured bioelectrical impedance of the user; and measuring a bioelectrical impedance of a current user of the electronic device; generating a value based on the measured bioelectrical impedance of the current user; comparing the value associated with the current user with the value associated with the user; and replacing the value associated with the user with the value associated with the current user when the value associated with the current user indicates that the current user is the user. 2. The method of claim 1, wherein: measuring the bioelectrical impedance of the user comprises measuring a resistance of the user. 3. The method of claim 1, wherein: measuring the bioelectrical impedance of the user comprises measuring a reactance of the user. 4. The method of claim 1, wherein: measuring the bioelectrical impedance of the user comprises measuring a bioelectrical impedance across two locations on a hand of the user. 5. The method of claim 1, further comprising associating the value with information corresponding to the user, and wherein: the electronic device comprises a media content receiver. 6. The method of claim 5, wherein: the information corresponding to the user comprises a favorite channels list associated with the user. 7. The method of claim 5, wherein: the information corresponding to the user comprises programming recommendations for the user. 8. The method of claim 5, wherein: the information corresponding to the user comprises parental control information associated with the user. 9. The method of claim 5, wherein: the information corresponding to the user comprises purchase information associated with the user. 10. The method of claim 5, wherein: the information corresponding to the user comprises peer group information associated with the user. 11. 
The method of claim 1, wherein: the value associated with the current user indicates the current user is the user if the value associated with the current user is within one of a predetermined difference and a predetermined percentage of the value of the user. 12. The method of claim 1, wherein: the value associated with the current user indicates that the current user is the one of the plurality of users if the value associated with the current user is closer to the value associated with the user than to another value associated with any other of a plurality of users that include the user. 13. (canceled) 14. The method of claim 1, further comprising: re-measuring the bioelectrical impedance of the user if the value associated with the user resides outside a predetermined range. 15. The method of claim 1, further comprising: re-measuring the bioelectrical impedance of the current user if the value associated with the current user resides outside a range associated with values associated with a plurality of users that includes the user. 16. The method of claim 1, further comprising: receiving an indication from the current user that the current user is not one of a plurality of users of the electronic device; and in response to receiving the indication, associating the value associated with the current user with information corresponding to the current user. 17. The method of claim 1, further comprising: in response to the comparison, operating the electronic device based on default information in response to the current user interacting with the electronic device if the value associated with the current user indicates the current user is not the user. 18. The method of claim 1, further comprising: in response to the comparison, associating the value associated with the current user with information corresponding to the current user if the value associated with the current user indicates the current user is not the user. 19. 
An electronic device, comprising: a memory; a communication interface configured to receive values based on measured bioelectrical impedances of a user of the electronic device, and to receive user commands for the electronic device; and control logic configured to: identify a current user based on a measured bioelectrical impedance from the communication interface; and in response to the identification, control the operation of the electronic device based on peer group information corresponding to the user in response to user commands initiated by the current user. 20. The electronic device of claim 19, further comprising: a content input interface configured to receive media content; and a content output interface configured to present the received media content to the user; wherein the control logic is further configured to control the content input interface and the content output interface based on the information corresponding to the current user when the current user is interacting with the electronic device.
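The identification-and-update logic recited in claims 1, 11, and 12 — compare the current user's impedance-derived value against the enrolled values, accept the closest match only if it falls within a percentage tolerance, and replace the stored value with the fresh measurement — can be sketched as follows. The function name, profile dictionary, and ohm values are hypothetical.

```python
def identify_user(current_value, enrolled, tolerance_pct=5.0):
    """Return the enrolled user whose stored impedance value is closest to the
    current measurement, provided it is within a percentage tolerance;
    otherwise return None (unknown user)."""
    best_user, best_diff = None, float("inf")
    for user, value in enrolled.items():
        diff = abs(current_value - value)
        if diff < best_diff:
            best_user, best_diff = user, diff
    if best_user is not None and best_diff <= enrolled[best_user] * tolerance_pct / 100:
        # Replace the stored value with the fresh measurement (cf. claim 1),
        # letting the profile track slow drift in a user's impedance.
        enrolled[best_user] = current_value
        return best_user
    return None

profiles = {"alice": 500.0, "bob": 620.0}   # ohms; illustrative values
assert identify_user(510.0, profiles) == "alice"
assert profiles["alice"] == 510.0           # stored value updated
assert identify_user(800.0, profiles) is None
```

An unmatched measurement would then fall through to claims 16-18: the device operates on default information, or the new value is enrolled as a new user profile.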
2,600
10,296
10,296
14,691,819
2,625
Visual images projected on a projection surface by a projector provide an interactive user interface having end user inputs detected by a detection device, such as a depth camera. The detection device monitors projected images initiated in response to user inputs to determine calibration deviations, such as by comparing the distance between where a user makes an input and where the input is projected. Calibration is performed to align the projected outputs and detected inputs. The calibration may include a coordinate system anchored by its origin to a physical reference point of the projection surface, such as a display mat or desktop edge.
1. An information handling system having a user interface presented at a projection surface, the information handling system comprising: a processor operable to process information for presentation to a user; memory interfaced with the processor and operable to store the information; a graphics system interfaced with the processor and memory, the graphics system operable to generate pixel information to create visual images at one or more display devices; a projector interfaced with the graphics system and operable to project the pixel information as the visual images at the projection surface; a detection device interfaced with the graphics system and operable to detect end user inputs made at the projection surface; and a calibration engine interfaced with the graphics system and detection device, the calibration engine operable to compare a location of an end user input detected by the detection device with a location of a visual image created by the end user input to determine a calibration deviation and to initiate calibration of the projector relative to the projection surface if the calibration deviation exceeds a threshold. 2. The system of claim 1 wherein the calibration engine initiates calibration with an infrared emitter and infrared camera to compare alignment of an infrared image with the projected visual images. 3. The system of claim 1 wherein the calibration engine initiates calibration by moving the projected visual images presented by the projector by the calibration deviation to the end user input location. 4. 
The system of claim 1 further comprising: a display mat disposed on the desktop, the display mat interfaced with the graphics system and operable to display the visual images, the display mat including capacitive sensors to detect end user input locations at the display mat; wherein the calibration engine is further operable to compare end user input locations detected by the display mat and end user input locations detected by the detection device to determine the calibration deviation. 5. The system of claim 4 further comprising: a user interface engine interfaced with the detection device and operable to establish a coordinate system anchored to a physical reference detected by the detection device at the projection surface; wherein the calibration engine initiates calibration of the projector relative to the projection surface to align projection with the coordinate system. 6. The system of claim 5 wherein the projection surface comprises a desktop and the physical reference comprises an increase in distance detected by the detection device along an edge of the desktop. 7. The system of claim 5 wherein the physical reference comprises a perimeter of the display mat. 8. The system of claim 5 wherein the physical reference comprises a predetermined location of the display mat indicated by illumination of a predetermined image at the display mat. 9. A method for calibrating an information handling system projected user interface and end user input locations, the method comprising: capturing an image of a projection surface with a detection device; analyzing the image to determine an end user input location; projecting an output at an output location of the projection surface in response to the end user input; capturing an image of the output with the detection device; analyzing the image of the output to determine the output location; and comparing the output location with the input location to determine calibration deviation. 10. 
The method of claim 9 wherein: the end user input comprises writing on the projection surface; the output comprises a projected image of the writing proximate the input location; and the calibration deviation comprises a difference between the location of the projected image of the writing and an intended location of the projected image of the writing. 11. The method of claim 10 further comprising: determining that the calibration deviation exceeds a threshold; and in response to the determining, initiating a calibration to align the input and output locations. 12. The method of claim 11 wherein initiating a calibration further comprises: presenting an infrared image at the projection surface; and capturing the infrared image with a camera to align the input and output locations. 13. The method of claim 9 further comprising: capturing with the detection device a physical reference associated with the projection surface; defining a projection area with Cartesian coordinates anchored at the physical reference; and in response to detecting the calibration deviation, calibrating the detection device and the projector to the Cartesian coordinates. 14. The method of claim 13 wherein the physical reference comprises a desktop edge and the detection device comprises a depth camera, the method further comprising: detecting with the depth camera an increase in measured distance to define a position of the desktop edge; defining the Cartesian coordinates from an origin located along the desktop edge; and periodically monitoring for calibration deviation of projected images relative to the desktop edge by reference to the Cartesian coordinates. 15. 
The method of claim 14 further comprising: detecting from the image of the desktop surface a position of a display mat disposed on the desktop surface; defining the position of the display mat relative to the projection area coordinates; and calibrating the depth camera and projector by reference to the detected position of the display mat. 16. The method of claim 13 wherein the physical reference comprises a display mat disposed on the projection surface, the display mat operable to present images from the information handling system and accept touch inputs to provide the touch inputs to the information handling system, the method further comprising: presenting a calibration image at a position on the display mat; and in response to detection of the calibration image through a depth camera image, calibrating the depth camera and projector to the calibration image position. 17. The method of claim 9 wherein the detection device comprises a depth camera, the end user input comprises writing motions by a pen at the projection surface, and the projected output comprises an image projected at the locations of the writing motions. 18. A system for calibrating an information handling system projected user interface and end user input locations comprising: a projector operable to project images at a projection surface; a depth camera operable to capture an image of the projection surface and to detect end user inputs made at the projection surface; and a calibration engine interfaced with the projector and depth camera, the calibration engine operable to compare a location of an end user input detected by the depth camera with a location of a visual image projected by the projector in response to the end user input to determine a calibration deviation and to initiate calibration of the projector relative to the projection surface if the calibration deviation exceeds a threshold. 19. 
The system of claim 18 wherein the calibration engine calibrates the projector to Cartesian coordinates having an origin anchored to a display mat resting on the projection surface. 20. The system of claim 18 wherein the projection surface comprises a desktop and the calibration engine calibrates the projector to Cartesian coordinates having an origin anchored to a desktop edge detected by the depth camera.
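The core loop these claims describe is simple: measure the distance between where the user's input was detected and where the projector drew the response, and if that deviation exceeds a threshold, shift the projection so the two align (claim 3's moving the projected image by the calibration deviation). A hedged sketch under assumed 2D point coordinates; the function names and threshold are illustrative.

```python
import math

def calibration_deviation(input_pt, output_pt):
    """Euclidean distance between the detected end-user input location
    and the location where the projector actually drew the output."""
    return math.hypot(output_pt[0] - input_pt[0], output_pt[1] - input_pt[1])

def maybe_recalibrate(input_pt, output_pt, offset, threshold=3.0):
    """If the deviation exceeds the threshold, shift the projection offset
    by the deviation vector so future output lands at the input location.
    `threshold` is a hypothetical tolerance in the same units as the points."""
    if calibration_deviation(input_pt, output_pt) > threshold:
        offset = (offset[0] + (input_pt[0] - output_pt[0]),
                  offset[1] + (input_pt[1] - output_pt[1]))
    return offset

offset = (0.0, 0.0)
offset = maybe_recalibrate((100.0, 50.0), (104.0, 53.0), offset)
print(offset)  # deviation = 5.0 > 3.0, so offset becomes (-4.0, -3.0)
```

Anchoring the coordinate system to a physical reference such as the desktop edge or display mat (claims 13-16) means both points above can be expressed in the same frame, so the deviation survives repositioning of the camera or projector.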
2,600
10,297
10,297
15,819,127
2,657
An approach is provided that receives an audio stream and utilizes a voice activation detection (VAD) process to create a digital audio stream of voices from at least two different speakers. An automatic speech recognition (ASR) process is applied to the digital stream with the ASR process resulting in the spoken words to which a speaker turn detection (STD) process is applied to identify a number of speaker segments with each speaker segment ending at a word boundary. A speaker clustering algorithm is then applied to the speaker segments to associate one of the speakers with each of the speaker segments.
1. A method implemented by an information handling system that includes a memory and a processor, the method comprising: receiving an audio stream that comprises both a plurality of speech segments corresponding to a plurality of human speakers and a plurality of non-verbal segments; utilizing a voice activation detection (VAD) process on the audio stream, wherein an output of the VAD process is a digital audio stream of voices corresponding to the plurality of speech segments; inputting the VAD process output into an automatic speech recognition (ASR) process, wherein an output of the ASR process comprises a plurality of spoken words corresponding to the plurality of speech segments and is devoid of the plurality of non-verbal segments; inputting the ASR process output to a speaker turn detection (STD) process, wherein the STD process generates a plurality of speaker segments that each end at a word boundary of one of the plurality of spoken words; and applying a speaker clustering algorithm to the plurality of speaker segments, wherein the speaker clustering algorithm associates an identifier of one of the human speakers with each of the speaker segments. 2. The method of claim 1 further comprising: generating a textual transcript of the audio stream by outputting each of the speaker segments and the identifier of the associated human speaker. 3. The method of claim 1 further comprising: ingesting the textual transcript into a question answering (QA) system corpus. 4. The method of claim 1 further comprising: identifying a plurality of sets of vocal qualities from the audio stream, wherein each of the sets of vocal qualities corresponds to a different one of the plurality of human speakers; comparing the plurality of sets of vocal qualities to each of the plurality of spoken words; and associating one of the human speakers to each of the words based on the comparison. 5. 
The method of claim 4 wherein a change from a first of the plurality of human speakers to a second of the plurality of human speakers is limited to word boundaries found in the plurality of spoken words. 6. The method of claim 1 wherein the speaker detection process further comprises: associating a first word from the plurality of spoken words to a first set of vocal qualities; identifying a second word from the plurality of spoken words that is successive to the first word and corresponds to a second set of vocal qualities; inserting a speaker change mark between the first word and the second word in response to determining that the first set of vocal qualities is different from the second set of vocal qualities; adjusting a speaker change probability value in response to determining that the first word is at an end of a question; and maintaining the speaker change mark between the first word and the second word based on the adjusted speaker change probability value. 7. The method of claim 6 further comprising: analyzing a selected one of the speaker segments corresponding to the first word using a language model, wherein the analysis: increases the speaker change probability value in response to the selected speaker segment indicating a statement; increases the speaker change probability value in response to the selected speaker segment indicating a reply; and decreases the speaker change probability value in response to the selected speaker segment indicating a continuation of a previous speaker segment; and identifying the second word based on the speaker change probability value and the comparison of the second word to the first set of vocal qualities. 8. 
An information handling system comprising: one or more processors; a memory coupled to at least one of the processors; and a set of computer program instructions stored in the memory and executed by at least one of the processors in order to perform actions of: receiving an audio stream that comprises both a plurality of speech segments corresponding to a plurality of human speakers and a plurality of non-verbal segments; utilizing a voice activation detection (VAD) process on the audio stream, wherein an output of the VAD process is a digital audio stream of voices corresponding to the plurality of speech segments; inputting the VAD process output into an automatic speech recognition (ASR) process, wherein an output of the ASR process comprises a plurality of spoken words corresponding to the plurality of speech segments and is devoid of the plurality of non-verbal segments; inputting the ASR process output to a speaker turn detection (STD) process, wherein the STD process generates a plurality of speaker segments that each end at a word boundary of one of the plurality of spoken words; and applying a speaker clustering algorithm to the plurality of speaker segments, wherein the speaker clustering algorithm associates an identifier of one of the human speakers with each of the speaker segments. 9. The information handling system of claim 8 wherein the actions further comprise: generating a textual transcript of the audio stream by outputting each of the speaker segments and the identifier of the associated human speaker. 10. The information handling system of claim 8 wherein the actions further comprise: ingesting the textual transcript into a question answering (QA) system corpus. 11. 
The information handling system of claim 8 wherein the actions further comprise: identifying a plurality of sets of vocal qualities from the audio stream, wherein each of the sets of vocal qualities corresponds to a different one of the plurality of human speakers; comparing the plurality of sets of vocal qualities to each of the plurality of spoken words; and associating one of the human speakers to each of the words based on the comparison. 12. The information handling system of claim 11 wherein a change from a first of the plurality of human speakers to a second of the plurality of human speakers is limited to word boundaries found in the plurality of spoken words. 13. The information handling system of claim 8 wherein the actions further comprise: associating a first word from the plurality of spoken words to a first set of vocal qualities; identifying a second word from the plurality of spoken words that is successive to the first word and corresponds to a second set of vocal qualities; inserting a speaker change mark between the first word and the second word in response to determining that the first set of vocal qualities is different from the second set of vocal qualities; adjusting a speaker change probability value in response to determining that the first word is at an end of a question; and maintaining the speaker change mark between the first word and the second word based on the adjusted speaker change probability value. 14. 
The information handling system of claim 13 wherein the actions further comprise: analyzing a selected one of the speaker segments corresponding to the first word using a language model, wherein the analysis: increases the speaker change probability value in response to the selected speaker segment indicating a statement; increases the speaker change probability value in response to the selected speaker segment indicating a reply; and decreases the speaker change probability value in response to the selected speaker segment indicating a continuation of a previous speaker segment; and identifying the second word based on the speaker change probability value and the comparison of the second word to the first set of vocal qualities. 15. A computer program product stored in a computer readable storage medium, comprising computer program code that, when executed by an information handling system, causes the information handling system to perform actions comprising: receiving an audio stream that comprises both a plurality of speech segments corresponding to a plurality of human speakers and a plurality of non-verbal segments; utilizing a voice activation detection (VAD) process on the audio stream, wherein an output of the VAD process is a digital audio stream of voices corresponding to the plurality of speech segments; inputting the VAD process output into an automatic speech recognition (ASR) process, wherein an output of the ASR process comprises a plurality of spoken words corresponding to the plurality of speech segments and is devoid of the plurality of non-verbal segments; inputting the ASR process output to a speaker turn detection (STD) process, wherein the STD process generates a plurality of speaker segments that each end at a word boundary of one of the plurality of spoken words; and applying a speaker clustering algorithm to the plurality of speaker segments, wherein the speaker clustering algorithm associates an identifier of one of the human 
speakers with each of the speaker segments. 16. The computer program product of claim 15 wherein the actions further comprise: generating a textual transcript of the audio stream by outputting each of the speaker segments and the identifier of the associated human speaker; and ingesting the textual transcript into a question answering (QA) system corpus. 17. The computer program product of claim 15 wherein the actions further comprise: identifying a plurality of sets of vocal qualities from the audio stream, wherein each of the sets of vocal qualities corresponds to a different one of the plurality of human speakers; comparing the plurality of sets of vocal qualities to each of the plurality of spoken words; and associating one of the human speakers to each of the words based on the comparison. 18. The computer program product of claim 17 wherein a change from a first of the plurality of human speakers to a second of the plurality of human speakers is limited to word boundaries found in the plurality of spoken words. 19. The computer program product of claim 15 wherein the actions further comprise: associating a first word from the plurality of spoken words to a first set of vocal qualities; identifying a second word from the plurality of spoken words that is successive to the first word and corresponds to a second set of vocal qualities; inserting a speaker change mark between the first word and the second word in response to determining that the first set of vocal qualities is different from the second set of vocal qualities; adjusting a speaker change probability value in response to determining that the first word is at an end of a question; and maintaining the speaker change mark between the first word and the second word based on the adjusted speaker change probability value. 20. 
The computer program product of claim 19 wherein the actions further comprise: analyzing a selected one of the speaker segments corresponding to the first word using a language model, wherein the analysis: increases the speaker change probability value in response to the selected speaker segment indicating a statement; increases the speaker change probability value in response to the selected speaker segment indicating a reply; and decreases the speaker change probability value in response to the selected speaker segment indicating a continuation of a previous speaker segment; and identifying the second word based on the speaker change probability value and the comparison of the second word to the first set of vocal qualities.
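Claims 1 and 6 describe a pipeline where speaker turns may only be inserted at word boundaries produced by ASR, with a speaker change probability that is raised when the preceding word ends a question. The sketch below illustrates only that turn-detection step, assuming VAD and ASR have already produced timed words tagged with a vocal-quality feature; the feature comparison, probabilities, and threshold are hypothetical stand-ins for the patent's actual models.

```python
def detect_turns(words, base_prob=0.5, question_bonus=0.3, threshold=0.6):
    """words: list of (text, voice_feature) tuples from an assumed ASR stage.
    Returns lists of words, split wherever a speaker change is detected.
    Splits can only occur at word boundaries, per claim 1."""
    if not words:
        return []
    segments, current = [], [words[0][0]]
    for (prev_text, prev_feat), (text, feat) in zip(words, words[1:]):
        # Differing vocal qualities suggest a possible speaker change (claim 6).
        prob = base_prob if feat != prev_feat else 0.0
        # End of a question makes a reply, and thus a change, likelier.
        if prev_text.endswith("?"):
            prob += question_bonus
        if prob >= threshold:
            segments.append(current)  # keep the speaker change mark here
            current = []
        current.append(text)
    segments.append(current)
    return segments

words = [("how", "A"), ("are", "A"), ("you?", "A"), ("fine", "B"), ("thanks", "B")]
print(detect_turns(words))  # [['how', 'are', 'you?'], ['fine', 'thanks']]
```

A clustering stage (claim 1's final step) would then assign a speaker identifier to each returned segment, for example by grouping segments with similar voice features.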
An approach is provided that receives an audio stream and utilizes a voice activation detection (VAD) process to create a digital audio stream of voices from at least two different speakers. An automatic speech recognition (ASR) process is applied to the digital stream with the ASR process resulting in the spoken words to which a speaker turn detection (STD) process is applied to identify a number of speaker segments with each speaker segment ending at a word boundary. A speaker clustering algorithm is then applied to the speaker segments to associate one of the speakers with each of the speaker segments.1. A method implemented by an information handling system that includes a memory and a processor, the method comprising: receiving an audio stream that comprises both a plurality of speech segments corresponding to a plurality of human speakers and a plurality of non-verbal segments; utilizing a voice activation detection (VAD) process on the audio stream, wherein an output of the VAD process is a digital audio stream of voices corresponding to the plurality of speech segments; inputting the VAD process output into an automatic speech recognition (ASR) process, wherein an output of the ASR process comprises in a plurality of spoken words corresponding to the plurality of speech segments and is devoid of the plurality of non-verbal segments; inputting the ASR process output to a speaker turn detection (STD) process, wherein the STD process generates a plurality of speaker segments that each end at a word boundary of one of the plurality of spoken words; and applying a speaker clustering algorithm to the plurality of speaker segments, wherein the speaker clustering algorithm associates an identifier of one of the human speakers with each of the speaker segments. 2. The method of claim 1 further comprising: generating a textual transcript of the audio stream by outputting each of the speaker segments and the identifier of the associated human speaker. 3. 
The method of claim 1 further comprising: ingesting the textual transcript into a question answering (QA) system corpus. 4. The method of claim 1 further comprising: identifying a plurality of sets of vocal qualities from the audio stream, wherein each of the sets of vocal qualities corresponds to a different one of the plurality of human speakers; comparing the plurality of sets of vocal qualities to each of the plurality of spoken words; and associating one of the human speakers to each of the words based on the comparison. 5. The method of claim 4 wherein a change from a first of the plurality of human speaker to a second of the plurality of human speakers is limited to word boundaries found in the plurality of spoken words. 6. The method of claim 1 wherein the speaker detection process further comprises: associating a first word from the plurality of spoken words to a first set of vocal qualities; identifying a second word from the plurality of spoken words that is successive to the first word and corresponds to a second set of vocal qualities; inserting a speaker change mark between the first word and the second word in response to determining that the first set of vocal qualities is different from the second set of vocal qualities; adjusting a speaker change probability value in response to determining that the first word is at an end of a question; and maintaining the speaker change mark between the first word and the second word based on the adjusted speaker change probability value. 7. 
The method of claim 6 further comprising: analyzing a selected one of the speaker segments corresponding to the first word using a language model, wherein the analysis: increases the speaker change probability value in response to the selected speaker segment indicating a statement; increases the speaker change probability value in response to the selected speaker segment indicating a reply; and decreases the speaker change probability value in response to the selected speaker segment indicating a continuation of a previous speaker segment; and identifying the second word based on the speaker change probability value and the comparison of the second word to the first set of vocal qualities. 8. An information handling system comprising: one or more processors; a memory coupled to at least one of the processors; and a set of computer program instructions stored in the memory and executed by at least one of the processors in order to perform actions of: receiving an audio stream that comprises both a plurality of speech segments corresponding to a plurality of human speakers and a plurality of non-verbal segments; utilizing a voice activation detection (VAD) process on the audio stream, wherein an output of the VAD process is a digital audio stream of voices corresponding to the plurality of speech segments; inputting the VAD process output into an automatic speech recognition (ASR) process, wherein an output of the ASR process comprises in a plurality of spoken words corresponding to the plurality of speech segments and is devoid of the plurality of non-verbal segments; inputting the ASR process output to a speaker turn detection (STD) process, wherein the STD process generates a plurality of speaker segments that each end at a word boundary of one of the plurality of spoken words; and applying a speaker clustering algorithm to the plurality of speaker segments, wherein the speaker clustering algorithm associates an identifier of one of the human speakers with each 
of the speaker segments. 9. The information handling system of claim 8 wherein the actions further comprise: generating a textual transcript of the audio stream by outputting each of the speaker segments and the identifier of the associated human speaker. 10. The information handling system of claim 8 wherein the actions further comprise: ingesting the textual transcript into a question answering (QA) system corpus. 11. The information handling system of claim 8 wherein the actions further comprise: identifying a plurality of sets of vocal qualities from the audio stream, wherein each of the sets of vocal qualities corresponds to a different one of the plurality of human speakers; comparing the plurality of sets of vocal qualities to each of the plurality of spoken words; and associating one of the human speakers to each of the words based on the comparison. 12. The information handling system of claim 11 wherein a change from a first of the plurality of human speakers to a second of the plurality of human speakers is limited to word boundaries found in the plurality of spoken words. 13. The information handling system of claim 8 wherein the actions further comprise: associating a first word from the plurality of spoken words to a first set of vocal qualities; identifying a second word from the plurality of spoken words that is successive to the first word and corresponds to a second set of vocal qualities; inserting a speaker change mark between the first word and the second word in response to determining that the first set of vocal qualities is different from the second set of vocal qualities; adjusting a speaker change probability value in response to determining that the first word is at an end of a question; and maintaining the speaker change mark between the first word and the second word based on the adjusted speaker change probability value. 14. 
The information handling system of claim 13 wherein the actions further comprise: analyzing a selected one of the speaker segments corresponding to the first word using a language model, wherein the analysis: increases the speaker change probability value in response to the selected speaker segment indicating a statement; increases the speaker change probability value in response to the selected speaker segment indicating a reply; and decreases the speaker change probability value in response to the selected speaker segment indicating a continuation of a previous speaker segment; and identifying the second word based on the speaker change probability value and the comparison of the second word to the first set of vocal qualities. 15. A computer program product stored in a computer readable storage medium, comprising computer program code that, when executed by an information handling system, causes the information handling system to perform actions comprising: receiving an audio stream that comprises both a plurality of speech segments corresponding to a plurality of human speakers and a plurality of non-verbal segments; utilizing a voice activation detection (VAD) process on the audio stream, wherein an output of the VAD process is a digital audio stream of voices corresponding to the plurality of speech segments; inputting the VAD process output into an automatic speech recognition (ASR) process, wherein an output of the ASR process comprises a plurality of spoken words corresponding to the plurality of speech segments and is devoid of the plurality of non-verbal segments; inputting the ASR process output to a speaker turn detection (STD) process, wherein the STD process generates a plurality of speaker segments that each end at a word boundary of one of the plurality of spoken words; and applying a speaker clustering algorithm to the plurality of speaker segments, wherein the speaker clustering algorithm associates an identifier of one of the human 
speakers with each of the speaker segments. 16. The computer program product of claim 15 wherein the actions further comprise: generating a textual transcript of the audio stream by outputting each of the speaker segments and the identifier of the associated human speaker; and ingesting the textual transcript into a question answering (QA) system corpus. 17. The computer program product of claim 15 wherein the actions further comprise: identifying a plurality of sets of vocal qualities from the audio stream, wherein each of the sets of vocal qualities corresponds to a different one of the plurality of human speakers; comparing the plurality of sets of vocal qualities to each of the plurality of spoken words; and associating one of the human speakers to each of the words based on the comparison. 18. The computer program product of claim 17 wherein a change from a first of the plurality of human speakers to a second of the plurality of human speakers is limited to word boundaries found in the plurality of spoken words. 19. The computer program product of claim 15 wherein the actions further comprise: associating a first word from the plurality of spoken words to a first set of vocal qualities; identifying a second word from the plurality of spoken words that is successive to the first word and corresponds to a second set of vocal qualities; inserting a speaker change mark between the first word and the second word in response to determining that the first set of vocal qualities is different from the second set of vocal qualities; adjusting a speaker change probability value in response to determining that the first word is at an end of a question; and maintaining the speaker change mark between the first word and the second word based on the adjusted speaker change probability value. 20. 
The computer program product of claim 19 wherein the actions further comprise: analyzing a selected one of the speaker segments corresponding to the first word using a language model, wherein the analysis: increases the speaker change probability value in response to the selected speaker segment indicating a statement; increases the speaker change probability value in response to the selected speaker segment indicating a reply; and decreases the speaker change probability value in response to the selected speaker segment indicating a continuation of a previous speaker segment; and identifying the second word based on the speaker change probability value and the comparison of the second word to the first set of vocal qualities.
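The speaker-turn step recited in claims 6-7 (and their system and product counterparts) can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the word representation, the segment-type labels, the probability increments, and the 0.5 keep-threshold are all invented for the example.

```python
# Hypothetical sketch of the claim 6-7 speaker-change logic: compare
# vocal qualities at a word boundary, then adjust a speaker-change
# probability with language-model-style cues before keeping the mark.

def detect_speaker_change(first_word, second_word, base_probability=0.5):
    """Return True if a speaker-change mark between the two words survives."""
    # Claim 6: a mark is only inserted when the vocal qualities differ.
    if first_word["vocal_qualities"] == second_word["vocal_qualities"]:
        return False
    probability = base_probability
    # Claim 6: a word ending a question raises the change probability.
    if first_word["text"].endswith("?"):
        probability += 0.3
    # Claim 7: statements and replies raise it; continuations lower it.
    segment_type = first_word.get("segment_type")
    if segment_type in ("statement", "reply"):
        probability += 0.1
    elif segment_type == "continuation":
        probability -= 0.3
    # Keep the mark only if the adjusted probability clears the threshold.
    return probability >= 0.5
```

For instance, a question-final word followed by a word with different vocal qualities keeps the mark, while a mid-sentence continuation with the same qualities never gets one.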
2,600
10,298
10,298
14,211,722
2,685
A subset of codes is efficiently captured from a remote generator and used in a remote access device. The subset of codes contains more codes for common functions than for uncommon functions and enables operation of special functions.
1. A method for capturing a subset of output codes from a rolling code sequence device comprising: actuating a first function of the device to generate a first output code; after the actuation of the first function but before actuating any other function, again actuating the first function of the device to generate a second output code; after the actuation of the first function again, actuating a second function different from the first function of the device to generate a third output code; and storing into a memory the subset of output codes comprising, in sequential order, the first output code, the second output code, and then the third output code. 2. The method of claim 1, wherein the device is a remote access device configured to, for any actuated function of a plurality of functions, use a next sequential value from a rolling code generator to generate an output code. 3. The method of claim 1, wherein the first function is one of a lock function and an unlock function. 4. The method of claim 1, further comprising: actuating a special sequence of functions of the device to generate a special sequence of output codes; and storing the special sequence of output codes as a part of the subset of output codes, wherein the device is a remote access device configured to be paired with a receiver, the receiver configured to use a rolling code generator to verify codes received from the transmitter; and wherein the receiver is configured to resynchronize the rolling code generator upon receiving the special sequence of output codes. 5. The method of claim 1, further comprising: repeatedly actuating sets of functions; and appending output codes to the subset of output codes in the order generated by the device, wherein actuating each set of functions comprises: actuating the first function followed by actuating the first function again; and actuating each function of the plurality of functions at least once. 6. 
The method of claim 5, wherein each set of functions further comprises another function followed by the another function again. 7. A remote transmitter for sending codes to a receiver comprising: a memory storing a subset of output codes, the subset of output codes comprising, in sequential order, a first code, a second code, and then a third code; an input system configured to receive a selected function from among a plurality of functions operable on the receiver; and an antenna configured for sending codes to the receiver; wherein the first code operates a first function of the plurality of functions; wherein the second code operates the first function of the plurality of functions; and wherein the third code operates a second function of the plurality of functions. 8. The transmitter of claim 7, wherein the first function is one of a lock function and an unlock function. 9. The transmitter of claim 7, wherein the subset of codes further comprises a special sequence of codes, wherein a receiver paired to the transmitter will resynchronize a rolling code generator of the receiver upon the receiver receiving the special sequence of codes. 10. The transmitter of claim 7, wherein each code in the subset of output codes operates one of the plurality of functions of the receiver; wherein the subset of codes comprises a sequence of groups of codes; and wherein each group of codes comprises: for each one function of the plurality of functions, at least one code configured to operate that one function of the receiver; and two consecutive codes configured to operate the first function. 11. 
The transmitter of claim 10, wherein the transmitter is configured to, upon receiving an input to operate a specific function of the plurality of functions, transmit a specific code from the subset of output codes configured to operate the specific function; and wherein the transmitter is configured to, upon receiving a next input to operate a next function of the plurality of functions and before receiving any other input, transmit the earliest code in the sequence after the specific code that operates the next function of the plurality of functions. 12. The transmitter of claim 10, wherein each group of codes further comprises a special sequence of codes that, when received by the receiver, cause the receiver to resynchronize a rolling code generator of the receiver, the rolling code generator used to verify codes received from the transmitter. 13. The transmitter of claim 7, wherein the first code matches a first output code of a rolling code sequence device generated by actuating a first function of the device; wherein the second code matches a second output code of the device generated by again actuating, after the actuation of the first function but before actuating any other function, the first function of the device; and wherein the third code matches a third output code of the device generated by actuating, after the again actuation, a second function of the device. 14. The transmitter of claim 13, wherein the device is configured to, for any actuated function of a plurality of functions, use a next sequential value from a rolling code generator to generate an output code. 15. 
A method, performed by a transmitter, of transmitting codes to a receiver, the method comprising: upon receiving an input to operate a specific function of a plurality of functions operable by the receiver, transmitting a specific code from a sequential subset of codes configured to operate the specific function; and upon receiving a next input to operate a next function of the plurality of functions and before receiving any other input, transmitting the earliest code in the sequential subset of codes after the specific code that operates the next function of the plurality of functions, wherein the receiver is configured to use a rolling code generator to verify codes received from the transmitter; wherein the sequential subset of codes comprises, in sequential order, a first code, a second code, and then a third code; wherein the first code operates a first function of the plurality of functions, wherein the second code operates the first function of the plurality of functions, and wherein the third code operates a second function of the plurality of functions, the second function different from the first function. 16. The method of claim 15, wherein the first function is one of a lock function and an unlock function. 17. The method of claim 15, wherein the sequential subset of codes comprises a sequence of groups of codes, and each group comprises: for each one function of the plurality of functions, at least one code configured to operate that one function; and two consecutive codes configured to operate the first function. 18. The method of claim 17 further comprising: upon receiving a special sequence of inputs to operate a special sequence of functions, transmitting a special sequence of codes to cause the rolling code generator to synchronize. 19. The method of claim 18, wherein each group of codes comprises special consecutive codes that, when sent to the receiver, will cause the rolling code generator to synchronize. 20. 
The method of claim 15, wherein the first code matches a first output code of a rolling code sequence device generated by actuating a first function of the device; wherein the second code matches a second output code of the device generated by again actuating, after the actuation of the first function of the device but before actuating any other function of the device, the first function of the device; and wherein the third code matches a third output code of the device generated by actuating, after the again actuation, a second function of the device. 21. A method for capturing a subset of output codes for operating a plurality of functions from a rolling code sequence device comprising: generating a sequence of output codes by repeatedly performing sets of sequential function actuations on the rolling code sequence device; and storing into a memory the subset of output codes comprising the sequence of output codes; wherein each set of sequential function actuations comprises: actuating, for a plurality of times, a common function; actuating, for a number of times that is less than the plurality, an uncommon function; and actuating each function of a plurality of functions at least once. 22. The method of claim 21, wherein each set of sequential function actuations is a same set of sequential function actuations of a same number of function actuations.
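Claims 11 and 15 describe a transmitter that walks forward through the captured code subset, sending the earliest not-yet-used code that operates the requested function. A minimal sketch follows; the class name, the list-of-pairs representation of the captured subset, and the code values are all invented for illustration, not taken from the patent.

```python
class SubsetTransmitter:
    """Replays captured rolling codes in order (sketch of claims 11/15)."""

    def __init__(self, captured_codes):
        # captured_codes: list of (function, code) pairs in capture order,
        # e.g. produced by the repeated-common-function capture of claim 1.
        self.captured_codes = captured_codes
        self.position = 0  # index just past the last code transmitted

    def transmit(self, function):
        """Send the earliest remaining code that operates `function`."""
        for i in range(self.position, len(self.captured_codes)):
            fn, code = self.captured_codes[i]
            if fn == function:
                # Never reuse or rewind past a sent code, so the receiver's
                # rolling-code window only ever moves forward.
                self.position = i + 1
                return code
        raise RuntimeError(f"captured subset exhausted for {function!r}")
```

With a subset captured as lock, lock, unlock, lock, requesting lock then unlock sends the first lock code, then skips the spare lock code to reach the unlock code; the duplicated common-function codes are what let the sequence absorb such skips.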
A subset of codes is efficiently captured from a remote generator and used in a remote access device. The subset of codes contains more codes for common functions than for uncommon functions and enables operation of special functions.1. A method for capturing a subset of output codes from a rolling code sequence device comprising: actuating a first function of the device to generate a first output code; after the actuation of the first function but before actuating any other function, again actuating the first function of the device to generate a second output code; after the actuation of the first function again, actuating a second function different from the first function of the device to generate a third output code; and storing into a memory the subset of output codes comprising, in sequential order, the first output code, the second output code, and then the third output code. 2. The method of claim 1, wherein the device is a remote access device configured to, for any actuated function of a plurality of functions, use a next sequential value from a rolling code generator to generate an output code. 3. The method of claim 1, wherein the first function is one of a lock function and an unlock function. 4. The method of claim 1, further comprising: actuating a special sequence of functions of the device to generate a special sequence of output codes; and storing the special sequence of output codes as a part of the subset of output codes, wherein the device is a remote access device configured to be paired with a receiver, the receiver configured to use a rolling code generator to verify codes received from the transmitter; and wherein the receiver is configured to resynchronize the rolling code generator upon receiving the special sequence of output codes. 5. 
The method of claim 1, further comprising: repeatedly actuating sets of functions; and appending output codes to the subset of output codes in the order generated by the device, wherein actuating each set of functions comprises: actuating the first function followed by actuating the first function again; and actuating each function of the plurality of functions at least once. 6. The method of claim 5, wherein each set of functions further comprises another function followed by the another function again. 7. A remote transmitter for sending codes to a receiver comprising: a memory storing a subset of output codes, the subset of output codes comprising, in sequential order, a first code, a second code, and then a third code; an input system configured to receive a selected function from among a plurality of functions operable on the receiver; and an antenna configured for sending codes to the receiver; wherein the first code operates a first function of the plurality of functions; wherein the second code operates the first function of the plurality of functions; and wherein the third code operates a second function of the plurality of functions. 8. The transmitter of claim 7, wherein the first function is one of a lock function and an unlock function. 9. The transmitter of claim 7, wherein the subset of codes further comprises a special sequence of codes, wherein a receiver paired to the transmitter will resynchronize a rolling code generator of the receiver upon the receiver receiving the special sequence of codes. 10. The transmitter of claim 7, wherein each code in the subset of output codes operates one of the plurality of functions of the receiver; wherein the subset of codes comprises a sequence of groups of codes; and wherein each group of codes comprises: for each one function of the plurality of functions, at least one code configured to operate that one function of the receiver; and two consecutive codes configured to operate the first function. 11. 
The transmitter of claim 10, wherein the transmitter is configured to, upon receiving an input to operate a specific function of the plurality of functions, transmit a specific code from the subset of output codes configured to operate the specific function; and wherein the transmitter is configured to, upon receiving a next input to operate a next function of the plurality of functions and before receiving any other input, transmit the earliest code in the sequence after the specific code that operates the next function of the plurality of functions. 12. The transmitter of claim 10, wherein each group of codes further comprises a special sequence of codes that, when received by the receiver, cause the receiver to resynchronize a rolling code generator of the receiver, the rolling code generator used to verify codes received from the transmitter. 13. The transmitter of claim 7, wherein the first code matches a first output code of a rolling code sequence device generated by actuating a first function of the device; wherein the second code matches a second output code of the device generated by again actuating, after the actuation of the first function but before actuating any other function, the first function of the device; and wherein the third code matches a third output code of the device generated by actuating, after the again actuation, a second function of the device. 14. The transmitter of claim 13, wherein the device is configured to, for any actuated function of a plurality of functions, use a next sequential value from a rolling code generator to generate an output code. 15. 
A method, performed by a transmitter, of transmitting codes to a receiver, the method comprising: upon receiving an input to operate a specific function of a plurality of functions operable by the receiver, transmitting a specific code from a sequential subset of codes configured to operate the specific function; and upon receiving a next input to operate a next function of the plurality of functions and before receiving any other input, transmitting the earliest code in the sequential subset of codes after the specific code that operates the next function of the plurality of functions, wherein the receiver is configured to use a rolling code generator to verify codes received from the transmitter; wherein the sequential subset of codes comprises, in sequential order, a first code, a second code, and then a third code; wherein the first code operates a first function of the plurality of functions, wherein the second code operates the first function of the plurality of functions, and wherein the third code operates a second function of the plurality of functions, the second function different from the first function. 16. The method of claim 15, wherein the first function is one of a lock function and an unlock function. 17. The method of claim 15, wherein the sequential subset of codes comprises a sequence of groups of codes, and each group comprises: for each one function of the plurality of functions, at least one code configured to operate that one function; and two consecutive codes configured to operate the first function. 18. The method of claim 17 further comprising: upon receiving a special sequence of inputs to operate a special sequence of functions, transmitting a special sequence of codes to cause the rolling code generator to synchronize. 19. The method of claim 18, wherein each group of codes comprises special consecutive codes that, when sent to the receiver, will cause the rolling code generator to synchronize. 20. 
The method of claim 15, wherein the first code matches a first output code of a rolling code sequence device generated by actuating a first function of the device; wherein the second code matches a second output code of the device generated by again actuating, after the actuation of the first function of the device but before actuating any other function of the device, the first function of the device; and wherein the third code matches a third output code of the device generated by actuating, after the again actuation, a second function of the device. 21. A method for capturing a subset of output codes for operating a plurality of functions from a rolling code sequence device comprising: generating a sequence of output codes by repeatedly performing sets of sequential function actuations on the rolling code sequence device; and storing into a memory the subset of output codes comprising the sequence of output codes; wherein each set of sequential function actuations comprises: actuating, for a plurality of times, a common function; actuating, for a number of times that is less than the plurality, an uncommon function; and actuating each function of a plurality of functions at least once. 22. The method of claim 21, wherein each set of sequential function actuations is a same set of sequential function actuations of a same number of function actuations.
2,600
10,299
10,299
16,371,838
2,642
Methods and devices are provided for allowing a mobile device (e.g., a key fob or a consumer electronic device, such as a mobile phone, watch, or other wearable device) to interact with a vehicle such that a location of the mobile device can be determined by the vehicle, thereby enabling certain functionality of the vehicle. A device may include both RF antenna(s) and magnetic antenna(s) for determining a location of a mobile device relative to the vehicle. Such a hybrid approach can provide various advantages. Existing magnetic coils on a mobile device (e.g., for charging or communication) may be re-used for distance measurements that are supplemented by the RF measurements. Any device antenna may provide measurements to a machine learning model that determines a region in which the mobile device resides, based on training measurements in the regions.
1. (canceled) 2. A method for determining a location of a mobile device relative to a vehicle, the method comprising: receiving a set of signal values measured using one or more device antennas of the mobile device, the set of signal values providing one or more signal properties of signals from one or more vehicle antennas having various locations in the vehicle, wherein the one or more signal properties of a signal change with respect to a distance between a device antenna of the mobile device that received the signal and a vehicle antenna that emitted the signal; storing a machine learning model that classifies a location of the mobile device as being within a region of a set of regions in a vicinity of the vehicle based on the one or more signal properties of the signals from the one or more vehicle antennas, the machine learning model being trained using various sets of signal values measured at various locations across the set of regions; and providing the set of signal values to the machine learning model to obtain a current classification of a particular region of the set of regions, the particular region corresponding to the location of the mobile device. 3. The method of claim 2, further comprising: providing the particular region to a control unit of the vehicle, thereby enabling the control unit to perform a prescribed operation of the vehicle. 4. The method of claim 2, wherein the set of regions includes a first subset of one or more regions outside the vehicle and a second subset of one or more regions outside the vehicle. 5. The method of claim 2, wherein the set of regions includes one or more regions outside the vehicle and one or more regions inside the vehicle. 6. 
The method of claim 2, further comprising: receiving one or more other values measured by the mobile device, the one or more other values providing one or more physical properties of the mobile device, wherein the machine learning model is trained using the one or more physical properties; and providing the one or more other values to the machine learning model to obtain the current classification of the particular region within which the mobile device is currently located. 7. The method of claim 2, wherein the method is performed by the mobile device, the method further comprising: measuring the signal values using the mobile device. 8. The method of claim 2, wherein the method is performed by a computer of the vehicle. 9. The method of claim 2, wherein the one or more signal properties include a signal strength, a time-of-flight value, or both. 10. The method of claim 2, further comprising: determining a location of the mobile device relative to the vehicle at a plurality of times, thereby obtaining a plurality of locations of the mobile device outside the vehicle; and providing the plurality of locations or a difference in the plurality of locations to a control unit of the vehicle, thereby enabling the control unit to perform a preparatory operation of the vehicle based on a motion of the mobile device toward the vehicle. 11. The method of claim 10, wherein the set of regions includes a first region and a second region that is farther away from the vehicle than the first region, and determining a location of the mobile device relative to the vehicle at a plurality of times comprises: determining a first location of the mobile device at a first time, the first location corresponding to the second region; and determining a second location of the mobile device at a second time later than the first time, the second location corresponding to the first region. 12. 
The method of claim 2, wherein the one or more device antennas of the mobile device include one or more radiofrequency (RF) antennas, include one or more magnetic antennas, or include one or more RF antennas and one or more magnetic antennas. 13. The method of claim 12, wherein the one or more device antennas of the mobile device include one or more RF antennas and one or more magnetic antennas, wherein the one or more RF antennas operate within a range of 3.1 GHz to 10.6 GHz, and wherein the one or more magnetic antennas operate within a range of 100 kHz to 900 kHz.
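Claim 2 leaves the machine learning model unspecified; a nearest-centroid classifier over per-antenna signal values is one minimal way to realize the train-then-classify flow it recites. The sketch below is an illustration only: the function names, the RSSI-style measurement vectors, and the region labels are invented for the example, not taken from the patent.

```python
# Minimal sketch of claim 2's region classification: train on labeled
# signal-value vectors (one value per vehicle antenna) measured across
# the set of regions, then classify a new measurement by nearest centroid.

def train_region_model(training_data):
    """training_data: {region: [measurement vectors]} -> centroid per region."""
    model = {}
    for region, vectors in training_data.items():
        dims = len(vectors[0])
        # The "model" is simply the mean measurement vector for each region.
        model[region] = [sum(v[d] for v in vectors) / len(vectors)
                         for d in range(dims)]
    return model

def classify_region(model, measurement):
    """Return the region whose centroid is closest to the measurement."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda region: dist2(model[region], measurement))
```

For example, with training vectors of signal strengths from three vehicle antennas collected in an "inside" region and a "driver_door" region, a fresh measurement is assigned to whichever region's centroid it sits nearest, which is the classification the claim then hands to the vehicle's control unit.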
Methods and devices are provided for allowing a mobile device (e.g., a key fob or a consumer electronic device, such as a mobile phone, watch, or other wearable device) to interact with a vehicle such that a location of the mobile device can be determined by the vehicle, thereby enabling certain functionality of the vehicle. A device may include both RF antenna(s) and magnetic antenna(s) for determining a location of a mobile device relative to the vehicle. Such a hybrid approach can provide various advantages. Existing magnetic coils on a mobile device (e.g., for charging or communication) may be re-used for distance measurements that are supplemented by the RF measurements. Any device antenna may provide measurements to a machine learning model that determines a region in which the mobile device resides, based on training measurements in the regions.1. (canceled) 2. A method for determining a location of a mobile device relative to a vehicle, the method comprising: receiving a set of signal values measured using one or more device antennas of the mobile device, the set of signal values providing one or more signal properties of signals from one or more vehicle antennas having various locations in the vehicle, wherein the one or more signal properties of a signal change with respect to a distance between a device antenna of the mobile device that received the signal and a vehicle antenna that emitted the signal; storing a machine learning model that classifies a location of the mobile device as being within a region of a set of regions in a vicinity of the vehicle based on the one or more signal properties of the signals from the one or more vehicle antennas, the machine learning model being trained using various sets of signal values measured at various locations across the set of regions; and providing the set of signal values to the machine learning model to obtain a current classification of a particular region of the set of regions, the particular region 
corresponding to the location of the mobile device. 3. The method of claim 2, further comprising: providing the particular region to a control unit of the vehicle, thereby enabling the control unit to perform a prescribed operation of the vehicle. 4. The method of claim 2, wherein the set of regions includes a first subset of one or more regions outside the vehicle and a second subset of one or more regions outside the vehicle. 5. The method of claim 2, wherein the set of regions includes one or more regions outside the vehicle and one or more regions inside the vehicle. 6. The method of claim 2, further comprising: receiving one or more other values measured by the mobile device, the one or more other values providing one or more physical properties of the mobile device, wherein the machine learning model is trained using the one or more physical properties; and providing the one or more other values to the machine learning model to obtain the current classification of the particular region within which the mobile device is currently located. 7. The method of claim 2, wherein the method is performed by the mobile device, the method further comprising: measuring the signal values using the mobile device. 8. The method of claim 2, wherein the method is performed by a computer of the vehicle. 9. The method of claim 2, wherein the one or more signal properties include a signal strength, a time-of-flight value, or both. 10. The method of claim 2, further comprising: determining a location of the mobile device relative to the vehicle at a plurality of times, thereby obtaining a plurality of locations of the mobile device outside the vehicle; and providing the plurality of locations or a difference in the plurality of locations to a control unit of the vehicle, thereby enabling the control unit to perform a preparatory operation of the vehicle based on a motion of the mobile device toward the vehicle. 11. 
The method of claim 10, wherein the set of regions includes a first region and a second region that is farther away from the vehicle than the first region, and determining a location of the mobile device relative to the vehicle at a plurality of times comprises: determining a first location of the mobile device at a first time, the first location corresponding to the second region; and determining a second location of the mobile device at a second time later than the first time, the second location corresponding to the first region. 12. The method of claim 2, wherein the one or more device antennas of the mobile device include one or more radiofrequency (RF) antennas, include one or more magnetic antennas, or include one or more RF antennas and one or more magnetic antennas. 13. The method of claim 12, wherein the one or more device antennas of the mobile device include one or more RF antennas and one or more magnetic antennas, wherein the one or more RF antennas operate within a range of 3.1 GHz to 10.6 GHz, and wherein the one or more magnetic antennas operate within a range of 100 kHz to 900 kHz.
2,600