Dataset schema (column, dtype, value range or string-length range):

    Unnamed: 0         int64    0 to 350k
    level_0            int64    0 to 351k
    ApplicationNumber  int64    9.75M to 96.1M
    ArtUnit            int64    1.6k to 3.99k
    Abstract           string   lengths 1 to 8.37k
    Claims             string   lengths 3 to 292k
    abstract-claims    string   lengths 68 to 293k (Abstract and Claims concatenated)
    TechCenter         int64    1.6k to 3.9k
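For orientation, here is a minimal sketch of loading and sanity-checking a table with this schema in pandas. The file name is hypothetical, and the assumption that abstract-claims is simply Abstract and Claims concatenated is inferred from the records below:

```python
import pandas as pd

# Hypothetical file name; any format pandas reads (CSV, parquet) would do.
df = pd.read_parquet("patent_applications.parquet")

# The schema above: integer bookkeeping/metadata columns plus three text columns.
print(df.dtypes)
print(df[["ApplicationNumber", "ArtUnit", "TechCenter"]].agg(["min", "max"]))

# In the records below, abstract-claims is Abstract and Claims concatenated,
# so the column is redundant if that holds across the whole table.
redundant = (df["abstract-claims"] == df["Abstract"] + df["Claims"]).mean()
print(f"abstract-claims == Abstract + Claims for {redundant:.1%} of rows")
```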
Record 10,800: ApplicationNumber 15,043,874, ArtUnit 2,616
Abstract:

The digital ink system receives digital ink input from a user and analyzes the digital ink input to collect ink stroke data for the various ink strokes that make up the digital ink. The digital ink system also receives an animation type selection that describes a manner in which the digital ink is to be displayed. The animation type is a dynamic display type, which is a display type in which the digital ink changes while it is displayed. The ink strokes of the digital ink input are displayed using the selected animation type, and are also stored along with the animation type in a digital ink container for subsequent display. The digital ink can subsequently be displayed using the animation type, or using a static display type in which the digital ink appears stationary while displayed.
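As a rough illustration of the container the claims below enumerate (per-stroke coordinates and pressure, the chosen animation type, and a legacy fallback), one possible in-memory shape is sketched here. All names are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class InkStroke:
    # Coordinates of the input device and pressure applied at each
    # coordinate while the ink input occurs (claims 2 and 3).
    points: List[Tuple[float, float]]
    pressures: List[float]

@dataclass
class DigitalInkContainer:
    strokes: List[InkStroke]
    animation_type: str            # e.g. "fire", "water", "smoke" (claim 14)
    legacy_animation: bytes = b""  # fallback for devices that do not
                                   # understand the animation type (claim 17)

def display_type(container: DigitalInkContainer,
                 override: Optional[str] = None) -> str:
    # An override display type, such as "static", takes precedence over the
    # stored input animation type (claims 7, 8, and 10).
    return override if override is not None else container.animation_type
```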
Claims:

1. A method comprising: receiving digital ink input made up of one or more digital ink strokes; receiving an input animation type selection for the digital ink input; collecting ink stroke data for each of the one or more digital ink strokes; displaying, using the input animation type, the one or more digital ink strokes of the digital ink input; adding, to a digital ink container, the ink stroke data and an indication of the input animation type; and communicating the digital ink container to a digital ink store.

2. The method of claim 1, the ink stroke data including coordinates of an input device where the digital ink input occurs.

3. The method of claim 2, the ink stroke data further including pressure applied at the coordinates while the digital ink input occurs.

4. The method of claim 1, further comprising adding to the digital ink container legacy data, the legacy data comprising an animated version of the digital ink that can be displayed.

5. The method of claim 1, the displaying comprising displaying the one or more digital ink strokes using the input animation type as the digital ink input is being received.

6. The method of claim 1, the method further comprising: receiving, after ceasing displaying of the one or more digital ink strokes, a user request to display the digital ink; obtaining the one or more digital ink strokes from the digital ink container; identifying, from the digital ink container, the input animation type; and displaying, in response to the user request, the one or more digital ink strokes using the input animation type.

7. The method of claim 6, the method further comprising: determining whether the input animation type is overridden; and displaying, in response to determining that the input animation type is overridden, the one or more digital ink strokes using an override display type rather than using the input animation type.

8. The method of claim 1, the method further comprising: receiving, after ceasing displaying of the one or more digital ink strokes, a user request to display the digital ink; obtaining the one or more digital ink strokes from the digital ink container; determining an override display type that is a static display type; and displaying, in response to the user request, the one or more digital ink strokes using the override display type rather than using the input animation type.

9. A computing device comprising: one or more processors; and a computer-readable storage medium having stored thereon multiple instructions that, responsive to execution by the one or more processors, cause the one or more processors to perform acts comprising: receiving a user request to display digital ink made up of one or more digital ink strokes; communicating with a digital ink store to obtain a digital ink container including the digital ink; obtaining the one or more digital ink strokes from the digital ink container; identifying, from the digital ink container, an input animation type for the digital ink; and displaying, in response to the user request, the one or more digital ink strokes using the input animation type.

10. The computing device of claim 9, the acts further comprising: determining whether the input animation type is overridden; and in response to determining that the input animation type is overridden: determining an override display type; and displaying the one or more digital ink strokes using the override display type rather than using the input animation type.

11. The computing device of claim 10, the acts further comprising: receiving, after displaying the one or more digital ink strokes using the override display type, a selection of an additional animation type; and displaying the one or more digital ink strokes using the additional animation type rather than using the override display type.

12. The computing device of claim 10, the override display type comprising a static display type.

13. The computing device of claim 9, the acts further comprising: receiving, after displaying the one or more digital ink strokes using the input animation type, a selection of an additional animation type; and displaying the one or more digital ink strokes using the additional animation type rather than using the input animation type.

14. The computing device of claim 9, the animation type being one of a fire animation type, a water animation type, or a smoke animation type.

15. A system comprising: one or more storage devices configured to implement a digital ink store; and a digital ink system configured to receive from an input device an input of digital ink, receive an input animation type selection for the digital ink, collect ink stroke data for each of one or more digital ink strokes of the digital ink, display the one or more digital ink strokes using the input animation type, and add the ink stroke data and an indication of the input animation type to a digital ink container in the digital ink store.

16. The system of claim 15, the ink stroke data including coordinates of the input device where the digital ink input occurs.

17. The system of claim 15, the digital ink system being further configured to add to the digital ink container legacy data, the legacy data comprising an animated version of the digital ink that can be displayed by a device that does not understand the input animation type.

18. The system of claim 15, the digital ink system being further configured to: receive, after ceasing display of the one or more digital ink strokes, a user request to display the digital ink; obtain the one or more digital ink strokes from the digital ink container; identify, from the digital ink container, the input animation type; and display, in response to the user request, the one or more digital ink strokes using the input animation type.

19. The system of claim 18, the digital ink system being further configured to: determine whether the input animation type is overridden; and display, in response to determining that the input animation type is overridden, the one or more digital ink strokes using an override display type rather than using the input animation type.

20. The system of claim 15, the digital ink system being further configured to: receive, after ceasing display of the one or more digital ink strokes, a user request to display the digital ink; obtain the one or more digital ink strokes from the digital ink container; determine an override display type that is a static display type; and display, in response to the user request, the one or more digital ink strokes using the override display type rather than using the input animation type.
TechCenter: 2,600
Record 10,801: ApplicationNumber 16,033,348, ArtUnit 2,699
Abstract:

Systems for providing efficient manufacturing of paper, sheet, and/or box products of varying size and structure, often with pre-applied print ("pre-print"), are provided herein. One or more controllers can be used to aggregate orders and information to prepare one or more control plans (e.g., reel maps, reel plans, etc.) for processing a roll of web product through the manufacturing process. The control plan may include a set of instructions for operating one or more systems within the manufacturing process to form the desired finished paper-based product. In this regard, efficient manufacturing of various paper-based products, including corrugated boxes, folded cartons, labels, flexible paper, industrial bags, plates, cups, décor, and many others, can be achieved. Further, efficient customer ordering/tracking, job aggregation, print imposition, corrugator planning, and tracking and adjustments throughout the manufacturing process are contemplated.
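To make the "control plan" idea concrete, here is a minimal sketch of a reel map as data handed to downstream machines. The field names, millimetre positioning, and the printer/cutter/finisher device objects are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StructureArea:
    # One sheet or box structure area laid out on the roll of web product:
    # which printed image goes where, and the outline to cut afterwards.
    design_id: str
    print_origin_mm: Tuple[float, float]       # image position on the web
    cut_outline_mm: List[Tuple[float, float]]  # blank outline to cut

@dataclass
class ControlPlan:
    # A reel map for one roll; areas may belong to different aggregated orders.
    roll_id: str
    areas: List[StructureArea]
    finishing_steps: List[str]  # e.g. "fold-and-glue" for folded cartons

def dispatch(plan: ControlPlan, printer, cutter, finisher) -> None:
    # The controller provides the same plan to each system in the line;
    # each system consumes only the instructions relevant to it.
    printer.print_areas(plan.areas)
    cutter.cut_areas(plan.areas)
    finisher.run_steps(plan.finishing_steps)
```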
Claims:

1. A system for controlling manufacturing of one or more paper-based products, the system comprising: at least one printer configured to print on a roll of web product to form a roll of printed web product; at least one sheet formation/processing system that includes at least one cutting arrangement that is configured to cut a portion of the roll of printed web product; at least one controller configured to: generate a control plan associated with the roll of web product for at least one order from among a plurality of orders for the one or more paper-based products, wherein each order of the plurality of orders comprises at least one design for at least one paper-based product, wherein the at least one design includes one or more printed images, wherein the control plan includes at least a set of first order instructions for forming one or more first paper-based products from the roll of web product for fulfilling a first order, wherein the set of first order instructions comprises first plan instructions for forming one or more first sheet or box structure areas on the roll of web product, wherein each of the first sheet or box structure areas include a first printed image and are used to form the one or more first paper-based products; provide the control plan to the printer for controlling operation of the printer, wherein the control plan includes printing instructions to control operation of the printer to cause the first printed image to print at a desired position within each of the first sheet or box structure areas on the roll of web product to form the roll of printed web product; and provide the control plan to the sheet formation/processing system for controlling operation of the sheet formation/processing system, wherein the control plan includes cutting instructions to control operation of the cutting arrangement to cause one or more first sheet or box structures with the first printed image therein to be cut from the roll of printed web product, wherein the one or more first sheet or box structures with the first printed image therein are used to form the one or more first paper-based products for fulfilling the first order.

2. The system of claim 1, wherein the at least one sheet formation/processing system further includes a web formation device that is configured to use the roll of printed web product to form an updated web that includes at least one additional layer of material, wherein the updated web is used to form the one or more first paper-based products.

3. The system of claim 1 further comprising at least one finishing system for performing finishing operations on the one or more paper-based products, wherein the at least one controller is configured to provide the control plan to the finishing system for controlling operation of the finishing system, wherein the control plan includes finishing instructions to control operation of the finishing system to cause the one or more first paper-based products to be formed using the one or more first sheet or box structures.

4. A system for controlling manufacturing of one or more paper-based products, the system comprising: at least one controller configured to: generate a control plan associated with a roll of web product for at least one order from among a plurality of orders for the one or more paper-based products, wherein each order of the plurality of orders comprises at least one design for at least one paper-based product, wherein the at least one design includes one or more printed images, wherein the control plan includes at least a set of first order instructions for forming one or more first paper-based products from the roll of web product for fulfilling a first order, wherein the set of first order instructions comprises first plan instructions for forming one or more first sheet or box structure areas on the roll of web product, wherein each of the first sheet or box structure areas include a first printed image and are used to form the one or more first paper-based products; provide the control plan to at least one printer for controlling operation of the printer, wherein the control plan includes printing instructions to control operation of the printer to cause the first printed image to print at a desired position within each of the first sheet or box structure areas on the roll of web product to form a roll of printed web product; and provide the control plan to at least one sheet formation/processing system for controlling operation of the sheet formation/processing system, wherein the at least one sheet formation/processing system includes at least one cutting arrangement that is configured to cut a portion of the roll of printed web product, wherein the control plan includes cutting instructions to control operation of the cutting arrangement to cause one or more first sheet or box structures with the first printed image therein to be cut from the roll of printed web product, wherein the one or more first sheet or box structures with the first printed image therein are used to form the one or more first paper-based products for fulfilling the first order.

5. The system of claim 4, wherein the at least one sheet formation/processing system further includes a web formation device that is configured to use the roll of printed web product to form an updated web that includes at least one additional layer of material, wherein the updated web is used to form the one or more first paper-based products.

6. The system of claim 4, wherein the at least one sheet formation/processing system comprises a corrugator that is configured to form corrugated board web using the roll of printed web product, wherein the first paper-based product is a corrugated box with the first printed image.

7. The system of claim 4, wherein the at least one controller is configured to provide the control plan to at least one finishing system for controlling operation of the finishing system, wherein the control plan includes finishing instructions to control operation of the finishing system to cause the one or more first paper-based products to be formed using the one or more first sheet or box structures.

8. The system of claim 7, wherein the at least one controller is configured to determine if the one or more first paper-based products formed by the finishing system satisfies the first order.

9. The system of claim 7, wherein the finishing system comprises a folding device that is configured to fold and glue the one or more first sheet or box structures for use in formation of the one or more first paper-based products, wherein the one or more first paper-based products are one or more folded cartons that each include at least the first printed image.

10. The system of claim 7, wherein the finishing system comprises: a tubing device that is configured to form the one or more first sheet or box structures into a tube; and a bottoming device that is configured to form a bottom of the tube for use in formation of the one or more first paper-based products, wherein the one or more first paper-based products are one or more industrial bags that each include at least the first printed image.

11. The system of claim 7, wherein the finishing system comprises a cup forming device that is configured to form the one or more first sheet or box structures into one or more cups, wherein the one or more first paper-based products are one or more cups that each include the first printed image.

12. The system of claim 7, wherein the finishing system comprises a plate forming device that is configured to form the one or more first sheet or box structures into one or more plates, wherein the one or more first paper-based products are one or more plates that each include the first printed image.

13. The system of claim 4, wherein the at least one controller is configured to provide the control plan to at least one reel editor for controlling operation of the reel editor, wherein the control plan includes editing instructions to control operation of the reel editor to cause one or more portions of the roll of printed web product to be removed.

14. The system of claim 13, wherein the at least one controller is configured to update the control plan based on the one or more portions of the roll of printed web product that were removed.

15. The system of claim 4, wherein the at least one controller is configured to receive one or more of the plurality of orders for the one or more paper-based products.

16. The system of claim 4, wherein the at least one controller is configured to track the one or more first sheet or box structures after formation.

17. The system of claim 4, wherein the at least one controller is configured to determine if the one or more first sheet or box structures formed by the sheet formation/processing system satisfies the first order.

18. The system of claim 4, wherein the control plan for the roll of web product includes at least a set of second order instructions for forming one or more second paper-based products from the roll of web product for fulfilling a second order, wherein the set of second order instructions comprises second plan instructions for forming one or more second sheet or box structure areas on the roll of web product, wherein each of the second sheet or box structure areas include a second printed image and are used to form the one or more second paper-based products.

19. The system of claim 18, wherein the first plan instructions and the second plan instructions are configured to cause the one or more first sheet or box structure areas and the one or more second sheet or box structure areas to be formed adjacent to each other in a width direction on the roll of web product.

20. A method for controlling manufacturing of one or more paper-based products, the method comprising: generating a control plan associated with a roll of web product for at least one order from among a plurality of orders for the one or more paper-based products, wherein each order of the plurality of orders comprises at least one design for at least one paper-based product, wherein the at least one design includes one or more printed images, wherein the control plan includes at least a set of first order instructions for forming one or more first paper-based products from the roll of web product for fulfilling a first order, wherein the set of first order instructions comprises first plan instructions for forming one or more first sheet or box structure areas on the roll of web product, wherein each of the first sheet or box structure areas include a first printed image and are used to form the one or more first paper-based products; controlling, using the control plan, at least one printer, wherein the control plan includes printing instructions to control operation of the printer to cause the first printed image to print at a desired position within each of the first sheet or box structure areas on the roll of web product to form a roll of printed web product; and controlling, using the control plan, at least one sheet formation/processing system, wherein the at least one sheet formation/processing system includes at least one cutting arrangement that is configured to cut a portion of the roll of printed web product, wherein the control plan includes cutting instructions to control operation of the cutting arrangement to cause one or more first sheet or box structures with the first printed image therein to be cut from the roll of printed web product, wherein the one or more first sheet or box structures with the first printed image therein are used to form the one or more first paper-based products for fulfilling the first order.
TechCenter: 2,600
Record 10,802: ApplicationNumber 15,676,467, ArtUnit 2,657
Abstract:

A system for assessing effects of lightning strikes upon a specific aircraft based on a plurality of field reports is disclosed. The system includes one or more processors and a memory coupled to the processors, the memory storing data into a database and program code that, when executed by the one or more processors, causes the system to receive as input refined data extracted from the plurality of field reports. The refined data includes text indicating a plurality of lightning strikes upon the specific aircraft, and at least a portion of the text is structured into a sentence format. The system parses a unique sentence contained within the refined data to create a dependency parse graph that defines grammatical relationships between at least one word indicating a specific lightning strike upon the specific aircraft and the remaining words within the unique sentence. The unique sentence indicates the specific lightning strike.
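The parsing step maps naturally onto an off-the-shelf dependency parser. Here is a minimal sketch with spaCy (an assumption, since the patent does not name a parser), with deliberately simplified extraction heuristics:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def extract_strike_tuple(sentence: str):
    """Approximate the three-element output tuple of claim 5 (strike word,
    affected component, strike location), plus a negation flag for the
    'no effect' case of claim 4."""
    doc = nlp(sentence)
    strike = next((t for t in doc if t.lemma_ in {"lightning", "strike"}), None)
    if strike is None:
        return None
    # A 'neg' dependency edge in the sentence signals a negation
    # relationship, i.e. no effect to the component.
    negated = any(t.dep_ == "neg" for t in doc)
    component = next((t.text for t in doc if t.dep_ in {"dobj", "nsubjpass"}), None)
    location = next((t.text for t in doc if t.dep_ == "pobj"), None)
    return strike.text, component, location, negated

print(extract_strike_tuple("Lightning struck the radome near the nose."))
# expected roughly: ('Lightning', 'radome', 'nose', False)
```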
Claims:

1. A system (10) for assessing effects of lightning strikes upon a specific aircraft based on a plurality of field reports (20), the system comprising: one or more processors (185); and a memory (186) coupled to the one or more processors (185), the memory (186) storing data into a database (196) and program code that, when executed by the one or more processors (185), causes the system (10) to: receive as input refined data (76) extracted from the plurality of field reports (20), wherein the refined data (76) includes text indicating a plurality of lightning strikes upon the specific aircraft and at least a portion of the text is structured into a sentence format; parse a unique sentence contained within the refined data (76) to create a dependency parse graph (80) that defines grammatical relationships between at least one word indicating a specific lightning strike upon the specific aircraft and remaining words within the unique sentence, wherein the unique sentence is indicative of the specific lightning strike; and determine a component of the specific aircraft affected by the specific lightning strike, a location of the specific lightning strike upon the specific aircraft, and at least one word indicating the specific lightning strike based on the grammatical relationships defined by the dependency parse graph (80).

2. The system (10) of claim 1, wherein the system (10) determines an effect of the specific lightning strike upon the component of the specific aircraft.

3. The system (10) of claim 2, wherein the system (10) determines that the effect of the specific lightning strike upon the component of the specific aircraft has been removed.

4. The system (10) of claim 1, wherein the system (10) determines that there was no effect to the component from the specific lightning strike based on a negation relationship defined by the dependency parse graph (80).

5. The system (10) of claim 1, wherein the component of the specific aircraft affected by the specific lightning strike, the location of the specific lightning strike upon the specific aircraft, and the at least one word indicating the specific lightning strike are expressed as an output tuple including three elements.

6. The system (10) of claim 1, wherein the refined data (76) is determined by tokenizing input data from the plurality of field reports (20), removing punctuation from tokenized input data, performing a spell check on the tokenized input data, and replacing abbreviated words in the tokenized input data with a complete form of an abbreviated word.

7. The system (10) of claim 6, wherein the refined data (76) is further determined by retaining specific observations within the tokenized input data that indicate a particular lightning strike and discarding other observations unrelated to lightning strikes.

8. The system (10) of claim 6, wherein the refined data (76) is further determined by correcting a spelling of words contained within the tokenized input data that represent a specific aircraft component.

9. The system (10) of claim 6, wherein the spell check is executed based on a context-sensitive approach, and wherein a misspelled word is corrected based on bigrams created using historical data related to the specific aircraft.

10. The system (10) of claim 1, wherein the system (10) generates a final report (32) that provides a pictorial image summarizing a number of times lightning has struck various components of a model of aircraft (100) associated with the specific aircraft.

11. The system (10) of claim 1, wherein the plurality of field reports (20) summarize observations by an aircraft's pilot and crew during flight and maintenance records for the specific aircraft.

12. A method for assessing effects of lightning strikes upon a specific aircraft based on a plurality of field reports (20), the method comprising: receiving, by a computer (184), refined data (76) extracted from the plurality of field reports (20), wherein the refined data (76) includes text indicating a plurality of lightning strikes upon the specific aircraft and at least a portion of the text is structured into a sentence format; parsing, by the computer (184), a unique sentence contained within the refined data (76) to create a dependency parse graph (80) that defines grammatical relationships between at least one word indicating a specific lightning strike upon the specific aircraft and remaining words within the unique sentence; and determining a component of the specific aircraft affected by the specific lightning strike, a location of the specific lightning strike upon the specific aircraft, and at least one word indicating the specific lightning strike based on the grammatical relationships defined by the dependency parse graph (80).

13. The method of claim 12, comprising determining an effect of the specific lightning strike upon the component of the specific aircraft.

14. The method of claim 13, comprising determining that the effect of the specific lightning strike upon the component of the specific aircraft has been removed.

15. The method of claim 12, comprising determining that there was no effect to the component from the specific lightning strike based on a negation relationship defined by the dependency parse graph (80).

16. The method of claim 12, wherein the component of the specific aircraft affected by the specific lightning strike, the location of the specific lightning strike upon the specific aircraft, and the at least one word indicating the specific lightning strike are expressed as an output tuple including three elements.

17. The method of claim 12, comprising determining the refined data (76) by tokenizing input data from the plurality of field reports (20), removing punctuation from tokenized input data, performing a spell check on the tokenized input data, and replacing abbreviated words in the tokenized input data with a complete form of an abbreviated word.

18. The method of claim 17, further determining the refined data (76) by retaining specific observations within the tokenized input data that indicate a particular lightning strike and discarding other observations unrelated to lightning strikes.

19. The method of claim 17, further determining the refined data (76) by correcting a spelling of words contained within the tokenized input data that represent a specific aircraft component.

20. The method of claim 17, comprising executing the spell check based on a context-sensitive approach, and wherein a misspelled word is corrected based on bigrams created using historical data related to the specific aircraft.
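Claims 6 and 9 describe a refinement pipeline: tokenize, strip punctuation, expand abbreviations to their complete form, then a bigram-based context-sensitive spell check. A minimal sketch follows; the abbreviation glossary and the scoring rule are illustrative assumptions:

```python
import re
from collections import Counter
from typing import Dict, List, Set

# Hypothetical abbreviation glossary; a real system would maintain one
# for maintenance-report shorthand.
ABBREVIATIONS: Dict[str, str] = {"ltg": "lightning", "stab": "stabilizer"}

def refine(report: str, vocab: Set[str], bigrams: Counter) -> List[str]:
    # Tokenize and drop punctuation, then replace abbreviated words with
    # their complete form (claims 6 and 17).
    tokens = [t for t in re.split(r"\W+", report.lower()) if t]
    tokens = [ABBREVIATIONS.get(t, t) for t in tokens]
    out: List[str] = []
    for tok in tokens:
        if out and tok not in vocab:
            # Context-sensitive correction (claims 9 and 20): among known
            # words, prefer the one that most often follows the previous
            # token in bigrams built from historical data for the aircraft.
            best = max(vocab, key=lambda w: bigrams.get((out[-1], w), 0))
            if bigrams.get((out[-1], best), 0) > 0:
                tok = best
        out.append(tok)
    return out
```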
TechCenter: 2,600
Record 10,803: ApplicationNumber 16,040,437, ArtUnit 2,688
Abstract:

Provided herein are platforms for determining a real-time human behavior analysis of an unmanned vehicle by a plurality of autonomous or semi-autonomous land vehicles through infrastructure recognition and assessment. The platforms include a platform for determining a real-time parking status for a plurality of parking locations, a platform for detecting a traffic violation by a manned vehicle at a roadway location, and a platform for monitoring security of a physical location.
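As one concrete reading of the parking platform, here is a rule-based violation assessment (claim 8 below permits rule-based or machine-learning algorithms) over a location's regulations. The field names are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class ParkingRegulation:
    # Claim 6: meter requirement, time period, placard or permit requirement.
    meter_required: bool
    permit_required: bool
    max_minutes: int

def assess_violations(occupied_since: datetime, now: datetime,
                      reg: ParkingRegulation, meter_paid: bool,
                      permit_seen: bool) -> List[str]:
    """Rule-based violation assessment over sensed data for one parking
    location, returning the violation types of claim 7 that apply."""
    found: List[str] = []
    if reg.meter_required and not meter_paid:
        found.append("expired parking meter")
    if reg.permit_required and not permit_seen:
        found.append("missing placard or permit")
    if (now - occupied_since).total_seconds() / 60 > reg.max_minutes:
        found.append("expired parking term")
    return found
```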
1. A platform for determining a real-time parking status for a plurality of parking locations, the platform comprising: a) a plurality of autonomous or semi-autonomous land vehicles, each autonomous or semi-autonomous land vehicle comprising: (i) one or more sensors configured to collect a first sensed data corresponding to a parking location; and (ii) a communication device; and b) the platform further comprising a processor configured to provide an application comprising: (i) a database comprising the plurality of parking locations; (ii) a communication module receiving the first sensed data via the communication device; and (iii) a parking spot recognition module (1) applying a parking assessment algorithm to determine the real-time parking status of the parking location based at least on the first sensed data, and (2) transmitting the parking status to the database. 2. The platform of claim 1, further configured to detect a parking violation, wherein: a) the parking location is associated with at least one parking regulation; and b) the application further comprises a violation detection module applying a violation assessment algorithm to detect the parking violation based at least on the parking location, the at least one parking regulation associated with the parking location, and one or more of: the first sensed data, and the real-time parking status of the parking location. 3. The platform of claim 2, further configured to identify the manned vehicle, wherein: a) the one or more sensors are further configured to collect a second sensed data corresponding to an identification of a manned vehicle associated with the parking location; b) the communication module further receives the second sensed data via the communication device; and c) the application further comprises a vehicle identification module applying a vehicle identification algorithm to identify the manned vehicle based at least on the second sensed data. 4. The platform of claim 3, wherein the processor configured to provide an application further comprises a vehicle identity identification module applying a vehicle identification algorithm to identify the manned vehicle based at least on one or more of: the license plate number, a VIN number, a make, a model, or a placard associated with the manned vehicle. 5. The platform of claim 1, wherein the parking location comprises a GPS coordinate, a unique parking spot identifier, an area defined by three or more coordinates, or any combination thereof. 6. The platform of claim 2, wherein the parking regulation comprises a meter requirement, a time period, a placard or permit requirement, or any combination thereof. 7. The platform of claim 2, wherein the parking violation comprises parking in an illegal spot, parking in an expired spot, an expired parking meter, an expired parking term, a missing placard or permit, or any combination thereof. 8. The platform of claim 3, wherein at least one of the parking assessment algorithm and the violation assessment algorithm comprises a machine learning algorithm, a rule-based algorithm, or both. 9. The platform of claim 3, wherein the second sensed data corresponding to the identification of the manned vehicle comprises a license plate number, a VIN number, a make, a model, a placard, or any combination thereof. 10. The platform of claim 3, wherein the vehicle identification algorithm comprises a machine learning algorithm, an optical character recognition algorithm, a rule-based algorithm, or any combination thereof. 11. 
The platform of claim 1, wherein at least one of the vehicles comprises the processor and the application. 12. The platform of claim 1, wherein each of the vehicles comprises the processor and the application. 13. The platform of claim 1, further comprising a remote server in communication with one or more of the vehicles, wherein the remote server comprises the processor and the application. 14. The platform of claim 3, further comprising a data storage receiving and storing at least one of the first sensed data, the second sensed data, the parking location, the parking status, the identification of the manned vehicle, and the parking violation. 15. The platform of claim 2, further comprising a user interface allowing an administrative user to configure the database comprising parking locations and parking regulations. 16. The platform of claim 15, wherein the user interface is a graphic user interface or an application programming interface. 17. The platform of claim 3, further comprising a user interface allowing an administrative user to configure the parking assessment algorithm, the violation assessment algorithm, the vehicle identification algorithm, or any combination thereof. 18. The platform of claim 17, wherein the user interface allows the administrative user to configure the parking assessment algorithm, violation assessment algorithm, or vehicle identification algorithm by uploading algorithm rules, algorithm criteria, or both. 19. The platform of claim 3, further comprising an alerting module transmitting a notification to an enforcement agent, wherein the notification comprises at least one of: the parking location, the at least one parking regulation associated with the parking location, the first sensed data, the second sensed data, and the identification of the manned vehicle associated with the parking location. 20. 
A platform for detecting a traffic violation by a manned vehicle at a roadway location, the platform comprising: a) a plurality of autonomous or semi-autonomous land vehicles, each autonomous or semi-autonomous land vehicle comprising: (i) one or more sensors configured to collect a first sensed data corresponding to the roadway location, a second sensed data corresponding to a behavior associated with the manned vehicle, and a third sensed data corresponding to an identification of the manned vehicle; and (ii) a communication device; and b) the platform further comprising a processor configured to provide an application comprising: (i) a database comprising a plurality of roadway locations, each roadway location associated with at least one roadway regulation; (ii) a communication module receiving at least one of the first sensed data, the second sensed data, and the third sensed data via the communication device; (iii) a driving behavior assessment module applying a manned driving assessment algorithm to determine a driving behavior of the manned vehicle associated with the roadway location, based at least on the first sensed data, the second sensed data, or both; (iv) a traffic violation detection module applying a traffic violation assessment algorithm to detect a traffic violation associated with the manned vehicle and the roadway location, based at least on one or more of the driving behavior, the roadway location, the roadway regulation, the first sensed data, the second sensed data, and the third sensed data; and (v) an alerting module transmitting a notification to an enforcement agent, wherein the notification comprises at least one of the traffic violation, the driving behavior, the roadway location, the roadway regulation, the first sensed data, the second sensed data, and the third sensed data. 21. The platform of claim 20, wherein the traffic violation comprises an expired license plate, a license plate wanted by law enforcement, an illegal turn violation, a speeding violation, a red light violation, a stop sign violation, a yield sign violation, a signaling violation, a passing violation, a U-turn violation, a median violation, or any combination thereof. 22. The platform of claim 20, wherein the roadway regulation comprises a speed regulation, a stoplight regulation, a yield regulation, a passing regulation, a U-turn regulation, a median regulation, or any combination thereof. 23. The platform of claim 20, wherein at least one of the manned driving assessment algorithm and the traffic violation assessment algorithm comprises a machine learning algorithm, a rule-based algorithm, or both. 24. The platform of claim 20, wherein the first sensed data comprises a GPS coordinate, a unique roadway identifier, an area defined by three or more coordinates, or any combination thereof. 25. The platform of claim 20, wherein the roadway location comprises a street address, a street name, a cross street, a parking lot, a highway, a street, a boulevard, a freeway, a tollway, a bridge, or a tunnel. 26. The platform of claim 20, wherein the second sensed data corresponding to a behavior associated with the manned vehicle comprises a vehicle speed, a vehicle acceleration, a vehicle deceleration, a vehicle lane change, a vehicle turn, or any combination thereof. 27. The platform of claim 20, wherein the third sensed data corresponding to the identification of the manned vehicle comprises a license plate number, a VIN number, a make, a model, a placard, or any combination thereof. 28. 
The platform of claim 20, wherein at least one of the autonomous or semi-autonomous land vehicles comprises the processor and the application. 29. The platform of claim 28, wherein each of the autonomous or semi-autonomous land vehicles comprises the processor and the application. 30. The platform of claim 20, further comprising a remote server in communication with one or more of the autonomous or semi-autonomous land vehicles and wherein the remote server comprises the processor and the application. 31. The platform of claim 20, further comprising a data storage receiving and storing at least one of the first sensed data, the second sensed data, the third sensed data, the roadway location, the driving behavior, and the traffic violation. 32. The platform of claim 20, further comprising a user interface allowing an administrative user to configure the database comprising roadway locations and roadway regulations. 33. The platform of claim 32, wherein the user interface is a graphic user interface or an application programming interface. 34. The platform of claim 20, further comprising a user interface allowing an administrative user to configure the manned driving assessment algorithm, the traffic violation assessment algorithm, or both. 35. The platform of claim 34, wherein the user interface allows the administrative user to configure the manned driving assessment algorithm or the traffic violation assessment algorithm by uploading algorithm rules, algorithm criteria, or both. 36. The platform of claim 35, wherein the user interface is a graphic user interface or an application programming interface. 37. A platform for monitoring security of a physical location by an autonomous or semi-autonomous land vehicle, the platform comprising: a) a plurality of autonomous or semi-autonomous land vehicles, each autonomous or semi-autonomous land vehicle comprising: (i) a sensor configured to record a media corresponding to the physical location; (ii) an autonomous or semi-autonomous land propulsion system; and (iii) a communication device; b) a server processor configured to provide a server application comprising: (i) a server communication module receiving a monitoring request generated by a user, wherein the monitoring request comprises a monitoring location and a monitoring time; (ii) a dispatch module instructing the autonomous or semi-autonomous land propulsion system of at least one of the autonomous or semi-autonomous land vehicles based on the monitoring request; and c) a client processor configured to provide a client application comprising: (i) a request module receiving the monitoring request from the user; (ii) a client communication module receiving the media via the communication device, via the server communication module, or both, and transmitting the monitoring request to the server communication module; and (iii) a display module displaying the media to the user. 38. The platform of claim 37, wherein the media comprises video, an image, a sound, a measurement, or any combination thereof. 39. The platform of claim 37, wherein each autonomous or semi-autonomous land vehicle further comprises a filter processor. 40. The platform of claim 39, wherein at least one of the server processor, the client processor, and the filter processor further comprises a filter database comprising a plurality of media filters. 41. 
The platform of claim 40, wherein the plurality of media filters comprises a motion detection filter, a human detection filter, a proximity detection filter, an encroachment detection filter, a loitering detection filter, or any combination thereof. 42. The platform of claim 40, wherein the monitoring request further comprises one or more of the media filters. 43. The platform of claim 42, wherein the server application further comprises an assessment module applying a filtering algorithm to the media based on the one or more media filters, to form a filtered media. 44. The platform of claim 43, wherein the display module displays the filtered media to the user. 45. The platform of claim 37, wherein the monitoring location comprises a residential building, a commercial building, a parking lot, a park, a sports arena, or any combination thereof. 46. The platform of claim 37, wherein the monitoring time comprises a time period, a time interval, a start time, an end time, or any combination thereof. 47. The platform of claim 46, wherein the monitoring time is a recurring time. 48. The platform of claim 37, wherein the client application comprises a web application, a mobile application, or any combination thereof.
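A minimal sketch, assuming a callable-per-filter design, of how the media filters of claims 40-44 could form a filtered media stream. The filter names, the Frame shape, and the motion_score field are all hypothetical.

```python
from typing import Callable, Dict, Iterable, Iterator, List

Frame = dict                          # e.g. {"timestamp": ..., "motion_score": ...}
MediaFilter = Callable[[Frame], bool]

def motion_filter(frame: Frame) -> bool:
    # Placeholder: a real motion filter would compare consecutive frames.
    return frame.get("motion_score", 0.0) > 0.5

FILTER_DATABASE: Dict[str, MediaFilter] = {"motion": motion_filter}

def apply_filters(media: Iterable[Frame], names: List[str]) -> Iterator[Frame]:
    """Form the 'filtered media' of claim 43: keep frames any chosen filter flags."""
    filters = [FILTER_DATABASE[n] for n in names]
    for frame in media:
        if any(f(frame) for f in filters):
            yield frame
```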
2,600
10,804
10,804
16,033,852
2,649
Systems and methods are disclosed for managing communication strategies between implanted medical devices. Methods include temporal optimization relative to one or more identified conditions in the body. A selected characteristic, such as a signal representative of or linked to a biological function, is assessed to determine its likely impact on communication capabilities, and one or more communication strategies may be developed to optimize intra-body communication.
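One way to picture the temporal optimization described here (and made concrete in claim 19 below) is to schedule transmissions inside the electrically quiet part of the cardiac cycle. The sketch below is purely illustrative: the millisecond offsets are assumed placeholders, not values from the disclosure.

```python
# Offsets are illustrative assumptions, not values from the disclosure.
T_WAVE_END_MS = 400    # assumed R-wave-to-end-of-T-wave delay
NEXT_P_WAVE_MS = 700   # assumed R-wave-to-next-P-wave delay

def schedule_comm_attempt(r_wave_time_ms: float) -> float:
    """Pick a transmit time after the T-wave and before the next P-wave."""
    window_start = r_wave_time_ms + T_WAVE_END_MS
    window_end = r_wave_time_ms + NEXT_P_WAVE_MS
    return (window_start + window_end) / 2.0  # midpoint of the quiet window
```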
1. A first medical device comprising a communication module for communicating with a second medical device and a controller operatively coupled to the communication module, the controller configured to optimize communication by: determining a first condition of a first characteristic is present; and modifying communication with the second medical device based on the determination; wherein at least one of the first and second medical devices is implantable; and wherein the first characteristic is a cyclic biological phenomenon and the first condition is the occurrence of a recurring event in the cyclic biological phenomenon. 2. The first medical device of claim 1, wherein the communication module is configured for communication by conducted communication. 3. The first medical device of claim 1, wherein the first medical device is configured as an implantable medical device. 4. The first medical device of claim 1, wherein the controller is configured to further optimize communication by sequentially modifying communication with the second medical device based on the determination in a plurality of communication attempts. 5. The first medical device of claim 1, wherein the first characteristic is a detected status of a cardiac cycle, and the first condition is the occurrence of one of a cardiac R-wave or a cardiac T-wave. 6. The first medical device of claim 1, wherein the first characteristic is a detected status of a cardiac cycle, and the first condition is the occurrence of a pacing pulse. 7. The first medical device of claim 1, wherein the cyclic biological phenomenon is a repetitive patient movement. 8. The first medical device of claim 1, wherein the first characteristic is a detected status of a respiration cycle, and the first condition is the occurrence of one of an exhale or an inhale. 9. The first medical device of claim 1, wherein the first characteristic is a detected transthoracic impedance, and the first condition is the occurrence of one of a maximum impedance or a minimum impedance. 10. An implantable medical device system comprising a first medical device as recited in claim 1 and a second implantable medical device configured for communication with the first medical device, wherein the first medical device is an intracardiac pacing device, and the second implantable medical device is a subcutaneous defibrillator. 11. An implantable medical device system comprising a first medical device as recited in claim 1, and a second implantable medical device configured for communication with the first medical device, wherein the first medical device is a subcutaneous defibrillator, and the second implantable medical device is an intracardiac pacing device. 12. The first medical device of claim 1 further comprising a plurality of electrodes coupled to sensing circuitry adapted to sense the first characteristic and detect the first condition. 13. 
A first medical device comprising a communication module for communicating with a second medical device and a controller operatively coupled to the communication module, the controller configured to communicate with the second medical device by: identifying a present need for communication; attempting a first communication using a first communication strategy; determining whether the first communication was unsuccessful; and, in response to determining that the first communication was unsuccessful, determining whether the present need for communication relates to a critical issue and: if the present need for communication relates to a critical issue, adopting a first communication retry strategy; and if the present need for communication does not relate to a critical issue, adopting a second communication retry strategy. 14. The first medical device of claim 13 wherein the first communication retry strategy comprises omitting a retry and proceeding to deliver therapy to the patient, and the second communication retry strategy comprises retrying communication one or more times. 15. The first medical device of claim 14 wherein the controller is configured to modify a communication setting for use in the second communication retry strategy by: detecting a first condition of a first characteristic of the patient; and selecting the modified communication setting in light of the first condition of the first characteristic. 16. The first medical device of claim 13 wherein the first communication retry strategy comprises: detecting a first condition of a first characteristic of the patient; selecting a modified communication setting in light of the first condition of the first characteristic; and retrying communication using the modified communication setting. 17. The first medical device of claim 13 wherein at least one of the first and second communication retry strategies comprises identifying a successful communication strategy and recording one or more parameters of the successful communication strategy for use in subsequent communication by the first medical device to the second medical device. 18. The first medical device of claim 13 wherein the second communication retry strategy comprises adjusting a communication setting in light of a first condition of a first characteristic of the patient, the first characteristic being a cyclic biological phenomenon of the patient. 19. A first medical device comprising a communication module for communicating with a second medical device and a controller operatively coupled to the communication module, the controller configured to optimize communication by: identifying occurrence of a first cardiac R-wave; and attempting communication at a set time selected to be after a T-wave subsequent to the first cardiac R-wave and prior to a P-wave of a second cardiac R-wave that follows the first cardiac R-wave; wherein the first medical device further comprises a plurality of electrodes for receiving cardiac electrical signals and sensing circuitry adapted to detect occurrence of the first cardiac R-wave. 20. The first medical device of claim 19 wherein the sensing circuitry is adapted to detect divergence of the received cardiac electrical signal from baseline, and the set time is selected to occur while the received cardiac electrical signal is near baseline.
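A minimal sketch of the branch structure in claims 13-14 above, assuming a send callable that reports success or failure; the Need enum and the deliver_therapy stub are hypothetical names, not terms from the disclosure.

```python
from enum import Enum, auto
from typing import Callable, List

class Need(Enum):
    CRITICAL = auto()   # e.g. a therapy decision is pending
    ROUTINE = auto()

def deliver_therapy() -> None:
    print("proceeding to therapy without further communication attempts")

def communicate(send: Callable[[dict], bool], need: Need, settings: List[dict]) -> bool:
    """send(setting) -> True on success; settings[0] is the first strategy."""
    if send(settings[0]):
        return True
    if need is Need.CRITICAL:
        deliver_therapy()          # first retry strategy: omit the retry
        return False
    for setting in settings[1:]:   # second retry strategy: retry with modified settings
        if send(setting):
            return True
    return False
```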
2,600
10,805
10,805
14,821,726
2,689
A lock, such as a padlock or a door lock, having radio frequency identification (RFID) and/or Bluetooth capabilities is disclosed. The control system in the lock may obtain identifying information from a user device presented close to the lock, and operate an actuator to unlock the lock based on the identifying information. The lock may include both an RFID reader and a Bluetooth system in a single device, and may automatically lock and unlock the lock by detecting a presence or an absence of a user device near the lock. At least a portion of a front face of the lock may be made of non-metallic material. The lock may include an indicator for indicating a power-on state, a Bluetooth connection status, a locked or unlocked status, and/or a low-battery state.
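As a hedged sketch of the control flow in claim 1 below, the fragment assumes reader, radio, and actuator objects with the methods shown; none of these identifiers come from the application, and the credential store is an assumption.

```python
AUTHORIZED_IDS = {"04:A2:19:7C", "smartphone-7f31"}  # assumed credential store

def poll_and_actuate(rfid_reader, ble_radio, actuator) -> bool:
    """Unlock if either radio path yields an authorized identity."""
    identity = rfid_reader.read_id() or ble_radio.connected_device_id()
    if identity in AUTHORIZED_IDS:
        actuator.retract_pin()   # e.g. the servo-driven pin of claim 8
        return True
    return False
```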
1. A lock comprising: a housing; a locking mechanism to lock the lock; and a lock control system including a radio frequency identification (RFID) reader and a Bluetooth system, wherein the lock control system is configured to actuate the locking mechanism based on identifying information obtained via the RFID reader or the Bluetooth system. 2. The lock of claim 1, wherein at least a portion of a front face of the housing in front of an RFID antenna and a Bluetooth antenna is made of non-metallic material. 3. The lock of claim 1, wherein a non-metallic isolation is provided between a wrap-around body and a bottom plate of the housing. 4. The lock of claim 1, wherein an RFID circuitry and a Bluetooth circuitry are printed on a printed circuit board (PCB), and a ground plane provided for the RFID circuitry and the Bluetooth circuitry has a slot where an RFID antenna and a Bluetooth antenna are located. 5. The lock of claim 1, wherein a cutout is provided on a front plate of the housing, leaving an RFID antenna and a Bluetooth antenna exposed. 6. The lock of claim 1, wherein the lock is a padlock including a shackle, wherein the housing includes a dowel pin channel for accommodating a dowel pin attached around a bottom of the shackle, and wherein the dowel pin channel is extended vertically with a horizontal extension at a top of the dowel pin channel such that the shackle can move vertically freely but cannot swivel until the shackle is fully extended out of the housing. 7. The lock of claim 1, further comprising a data connector for connecting to an external entity, wherein the data connector is accessed through a channel formed in the housing. 8. The lock of claim 1, wherein the locking mechanism includes a servo motor and a metal pin drivable by the servo motor, and wherein the metal pin is inserted into a notch formed into a shackle when in a locked state, and retracted from the notch when in an unlocked state. 9. The lock of claim 1, further comprising an indicator for indicating at least one of a power-on state, a Bluetooth connection status, a locked or unlocked status, or a low-battery state. 10. The lock of claim 1, wherein the lock is a door lock including an external unit and an internal unit installed outside and inside a door, respectively, wherein the external unit includes an external handle and the internal unit includes an internal handle, wherein the locking mechanism includes a servo motor and a metal pin drivable by the servo motor, and wherein the metal pin is extended into a locking hole formed into the external handle in a locked state to prevent the external handle from rotating, and the metal pin is extracted from the locking hole in an unlocked state. 11. The lock of claim 10, wherein the external handle and/or the internal handle are a two-part assembly including a hand-gripping part and a plate rotatably attached to the external unit or the internal unit, and wherein the hand-gripping part is attached to the plate with two screws such that the screws are configured to break before any other components of the lock break when an excessive force is applied to the hand-gripping part. 12. The lock of claim 10, further comprising: a decoupling means configured to allow the internal handle to turn and open the door without operating the external handle. 13. The lock of claim 10, further comprising: an automatic back unlock means configured to detect operation of the internal handle to unlock the door and leave the door unlocked. 14. 
The lock of claim 10, further comprising: a door lock turning prevention means for allowing the external handle to rotate in a specific direction. 15. The lock of claim 1, wherein the lock stays in a sleep state until a button on a front plate of the lock is pressed by a user and returns to the sleep state after use or a predetermined period of inactivity. 16. The lock of claim 1, further comprising: a memory for storing identifying information of users that opened the lock, and time and date information that the lock was opened. 17. The lock of claim 1, wherein the lock control system is configured to automatically unlock and lock the lock by detecting a presence or an absence of an RFID device or a Bluetooth device within a range of the lock. 18. The lock of claim 1, wherein the lock control system is configured to maintain a list of Bluetooth-enabled devices that it has previously connected to, and establish a connection with a Bluetooth-enabled device based on the list. 19. The lock of claim 1, further comprising: a real time clock (RTC) configured to keep time and date information, wherein the lock control system is configured to keep records of identification of a device used to open the lock and time and date information when the lock is opened, and provide the records to a user or owner of the lock. 20. The lock of claim 19, wherein the lock control system is configured to provide time-based access to a location secured by the lock. 21. The lock of claim 1, further comprising: a Global Positioning System (GPS) module for obtaining geolocation information of the lock, wherein the lock control system is configured to actuate the locking mechanism based on the geolocation information. 22. The lock of claim 1, wherein a master access code is programmed in a lock firmware. 23. The lock of claim 1, wherein factory default settings on the lock are recovered by pressing a program button or by sending a predetermined credential after scanning an authorized device or receiving an authorized command. 24. The lock of claim 1, wherein authorization credentials of the lock are customized by a user by pressing a program button or by sending a predetermined credential after scanning an authorized device or receiving an authorized command. 25. The lock of claim 1, wherein the lock is configured to read several near field communication (NFC) communications simultaneously. 26. The lock of claim 1, further comprising: a sensor configured to detect when the lock is closed and send a signal to the lock control system indicating that the lock is closed. 27. The lock of claim 1, further comprising: a bolt that is spring-loaded in an open state and released in response to a signal sent from the lock control system to lock the lock.
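For the time-based access of claims 19-20 above, a minimal sketch might gate an otherwise-valid credential on the real time clock's current time; the ACCESS_WINDOWS table and its hours are assumptions for illustration.

```python
from datetime import datetime, time
from typing import Dict, Optional, Tuple

# Assumed table: credential -> (start, end) daily access window.
ACCESS_WINDOWS: Dict[str, Tuple[time, time]] = {
    "smartphone-7f31": (time(8, 0), time(18, 0)),
}

def access_allowed(identity: str, now: Optional[datetime] = None) -> bool:
    """Gate an otherwise-valid credential on the RTC's current time."""
    now = now or datetime.now()
    window = ACCESS_WINDOWS.get(identity)
    if window is None:
        return False
    start, end = window
    return start <= now.time() <= end
```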
2,600
10,806
10,806
16,113,921
2,662
A system and method for identifying an image of an individual by touching the image on a screen displaying a photo is disclosed herein. A feature vector of the individual is used to analyze other photos in a database or on a social networking website such as FACEBOOK® to determine whether an image of the individual is present in a photo of the set of photos. Photos having an image of the individual are tagged, preferably by listing a URL or URI for each of the photos in a database.
1. (canceled) 2. A method comprising: broadcasting a video over a network; receiving the video at a device that includes a touch screen display; displaying the video on the touch screen display; receiving user input via the touch screen display while the video is being broadcasted, the user input identifying a region on the display corresponding to a facial image to be analyzed; analyzing the facial image to obtain a feature vector; comparing the feature vector to a plurality of feature vectors stored in a database to obtain at least one comparison result that identifies a single individual across a plurality of images; employing the comparison result to enable information about the single individual to be collected from a plurality of different storages; and presenting the collected information on the touch screen display. 3. The method of claim 2 wherein the video is a pre-recorded video. 4. The method of claim 3 wherein the pre-recorded video is a movie. 5. The method of claim 2 wherein the device is a mobile phone or tablet computer. 6. The method of claim 2 wherein the different storages include a plurality of Internet-accessible media content repositories. 7. The method of claim 6 wherein each of the plurality of Internet-accessible media content repositories is maintained by a different company. 8. A method comprising: providing pre-recorded video over a network; receiving the pre-recorded video at a device that includes a touch screen display; displaying the pre-recorded video on the touch screen display; receiving user input via the touch screen display while the pre-recorded video is being played, the user input identifying a region on the display corresponding to a facial image to be analyzed; analyzing the facial image to obtain a feature vector; comparing the feature vector to a plurality of feature vectors stored in a database to obtain at least one comparison result that identifies a single individual across a plurality of images; employing the comparison result to enable information about the single individual to be collected from a plurality of different storages; and presenting the collected information on the touch screen display. 9. The method of claim 8 wherein the pre-recorded video is a movie. 10. The method of claim 8 wherein the device is a mobile phone or tablet computer. 11. The method of claim 8 wherein the different storages include a plurality of Internet-accessible media content repositories. 12. The method of claim 11 wherein each of the plurality of Internet-accessible media content repositories is maintained by a different company.
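A small sketch of the feature-vector comparison step in claims 2 and 8 above, using cosine similarity as one plausible metric; the claims do not name a metric, and the 0.8 threshold is an arbitrary assumption.

```python
import math
from typing import Dict, List, Optional

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify(query: List[float], database: Dict[str, List[float]],
             threshold: float = 0.8) -> Optional[str]:
    """Return the best-matching identity above the threshold, else None."""
    best_id, best_score = None, threshold
    for person_id, stored in database.items():
        score = cosine_similarity(query, stored)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```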
2,600
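The comparing step in claims 2 and 8 above, matching a feature vector from the touched facial region against stored vectors to identify a single individual across images, can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the application's implementation: the cosine-distance metric, the 0.6 threshold, and all function and variable names are hypothetical.

import numpy as np

def identify_individual(query_vec, db_vectors, db_labels, threshold=0.6):
    # Normalize so that 1 - dot product equals cosine distance.
    q = query_vec / np.linalg.norm(query_vec)
    db = db_vectors / np.linalg.norm(db_vectors, axis=1, keepdims=True)
    distances = 1.0 - db @ q
    best = int(np.argmin(distances))
    # Identify a single individual only when some stored vector is
    # close enough to the query vector; otherwise report no match.
    if distances[best] > threshold:
        return None
    return db_labels[best]

A returned label could then drive the remaining steps of the claims: collecting information about the matched individual from the different storages and presenting it on the touch screen display.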
10,807
10,807
15,075,025
2,643
The radio frequency environment surrounding a tower mounted, remote radio head (RRH) and its internal operation may be monitored without the need to climb the tower where the RRH is mounted. Many measurements, such as time/frequency measurements, may be made without climbing the tower.
1. A system for analyzing the operation of a radio frequency (RF) remote radio head comprising: a first receiving section operable to receive signals from a tower mounted, remote radio head (RRH), the signals comprising information related to signals from an RF environment at the RRH; a signal processing section operable to process the received signals in the time and frequency domains, and to identify one or more anomalies due to internal or external interfering signals from the RF environment at the RRH; and an interface for displaying a visualization of the one or more anomalies. 2. The system as in claim 1 wherein the received signals comprise one or more of the following types of data: RF interference, intermodulation distortion, spectral content, flicker noise, additive white Gaussian noise, colored noise, phase noise, carrier frequency, delay, RF signal strength. 3. The system as in claim 1 wherein the signal processing section is further operable to detect an anomaly by estimating the spectral content of the signals in the RF environment at the RRH based on the received signals. 4. The system as in claim 3, wherein the signal processing section further comprises a periodic sequence estimator for estimating spectral content, the periodic sequence estimator represented by the relationship: P_{xx}(\omega) = \frac{1}{N}\left|\sum_{n=0}^{N-1} x(n)\,e^{-j\omega n}\right|^{2} 5. The system as in claim 4, wherein the signal processing section further comprises a weighted window power density estimator for reducing a variance of the estimate, where the weighted window power spectral density estimator is represented by the relationship: P_{xx}^{ww}(\omega) = \sum_{k=-(N-1)}^{N-1} r_{xx}(k)\,w(k)\,e^{-j\omega k} 6. The system as in claim 1 wherein the signal processing section is further operable to detect an anomaly by identifying one or more acceptable or interfering RF signals in the RF environment at the RRH from the received signals based on a time and frequency analysis. 7. The system as in claim 6, wherein the signal processing section is further operable to complete time and frequency estimates of a multicomponent RF signal using the following relationship: \mathrm{TFR}(t,\omega) = \sum_{k=1}^{N} A_{k}(t,\omega)\,F_{k}(t,\omega) + XT 8. The system as in claim 7, wherein the signal processing section further comprises filter banks with transfer functions overlapped in frequency to avoid signal component artifacts. 9. The system as in claim 8, wherein a filter bank structure is represented by the relationship: C_{s} = \{\,s * h_{k} \mid k = 1 \ldots N_{\mathrm{filters}}\,\} 10. The system as in claim 9, wherein the signal processing section is further operable to complete a sub-band analysis process to identify signal structures. 11. The system as in claim 1 wherein the signal processing section is further operable to detect an anomaly by identifying one or more RF carriers, and each identified carrier's access scheme, in the RF environment at the RRH from the received signal vectors based on power and frequency estimates of each identified carrier. 12. The system as in claim 1 wherein the signal processing section is further operable to detect an anomaly by estimating the spectral coherence of the signals in the RF environment at the RRH from the received signals. 13. The system as in claim 12, wherein the signal processing section is further operable to compute a frequency response due to interfering signals based on the relationship: C_{xy}(f) = \frac{\left|\overline{S}_{xy}(f)\right|^{2}}{\overline{S}_{xx}(f)\,\overline{S}_{yy}(f)} 14.
The system as in claim 1 wherein the signal processing section is further operable to detect an anomaly by estimating the spectral density of the signals in the RF environment at the RRH from the received signals. 15. The system as in claim 1 further comprising a data storage section operable to store the received signal vectors, detected anomalies and the displayed visualizations. 16. The system as in claim 1 further comprising: an RRH, RF conversion and filter section for down converting over the air RF signals into digital signals; an RRH signal capture section for capturing the down converted digital signals and preprocessing the signals; and a second transceiving section at the RRH for transmitting the preprocessed signals from the RRH over the network to the first receiving section. 17. The system as in claim 16 wherein the first receiving section, signal processing section and the interface are part of a network element management system. 18. A method for analyzing the operation of a radio frequency (RF) remote radio head comprising: receiving signals from a tower mounted, remote radio head (RRH), the signals comprising information related to signals from an RF environment at the RRH; processing the received signals in the time and frequency domains to identify one or more anomalies due to internal or external interfering signals from the RF environment at the RRH; and displaying a visualization of the one or more anomalies. 19. The method as in claim 18 further comprising detecting an anomaly by estimating the spectral content of the signals in the RF environment at the RRH based on the received signals. 20. The method as in claim 19 further comprising detecting an anomaly by identifying one or more acceptable or interfering RF signals in the RF environment at the RRH from the received signals based on a time and frequency analysis.
The radio frequency environment surrounding a tower mounted, remote radio head (RRH) and its internal operation may be monitored without the need to climb the tower where the RRH is mounted. Many measurements, such as time/frequency measurements, may be made without climbing the tower. 1. A system for analyzing the operation of a radio frequency (RF) remote radio head comprising: a first receiving section operable to receive signals from a tower mounted, remote radio head (RRH), the signals comprising information related to signals from an RF environment at the RRH; a signal processing section operable to process the received signals in the time and frequency domains, and to identify one or more anomalies due to internal or external interfering signals from the RF environment at the RRH; and an interface for displaying a visualization of the one or more anomalies. 2. The system as in claim 1 wherein the received signals comprise one or more of the following types of data: RF interference, intermodulation distortion, spectral content, flicker noise, additive white Gaussian noise, colored noise, phase noise, carrier frequency, delay, RF signal strength. 3. The system as in claim 1 wherein the signal processing section is further operable to detect an anomaly by estimating the spectral content of the signals in the RF environment at the RRH based on the received signals. 4. The system as in claim 3, wherein the signal processing section further comprises a periodic sequence estimator for estimating spectral content, the periodic sequence estimator represented by the relationship: P_{xx}(\omega) = \frac{1}{N}\left|\sum_{n=0}^{N-1} x(n)\,e^{-j\omega n}\right|^{2} 5. The system as in claim 4, wherein the signal processing section further comprises a weighted window power density estimator for reducing a variance of the estimate, where the weighted window power spectral density estimator is represented by the relationship: P_{xx}^{ww}(\omega) = \sum_{k=-(N-1)}^{N-1} r_{xx}(k)\,w(k)\,e^{-j\omega k} 6. The system as in claim 1 wherein the signal processing section is further operable to detect an anomaly by identifying one or more acceptable or interfering RF signals in the RF environment at the RRH from the received signals based on a time and frequency analysis. 7. The system as in claim 6, wherein the signal processing section is further operable to complete time and frequency estimates of a multicomponent RF signal using the following relationship: \mathrm{TFR}(t,\omega) = \sum_{k=1}^{N} A_{k}(t,\omega)\,F_{k}(t,\omega) + XT 8. The system as in claim 7, wherein the signal processing section further comprises filter banks with transfer functions overlapped in frequency to avoid signal component artifacts. 9. The system as in claim 8, wherein a filter bank structure is represented by the relationship: C_{s} = \{\,s * h_{k} \mid k = 1 \ldots N_{\mathrm{filters}}\,\} 10. The system as in claim 9, wherein the signal processing section is further operable to complete a sub-band analysis process to identify signal structures. 11. The system as in claim 1 wherein the signal processing section is further operable to detect an anomaly by identifying one or more RF carriers, and each identified carrier's access scheme, in the RF environment at the RRH from the received signal vectors based on power and frequency estimates of each identified carrier. 12.
The system as in claim 1 wherein the signal processing section is further operable to detect an anomaly by estimating the spectral coherence of the signals in the RF environment at the RRH from the received signals. 13. The system as in claim 12, wherein the signal processing section is further operable to compute a frequency response due to interfering signals based on the relationship: C_{xy}(f) = \frac{\left|\overline{S}_{xy}(f)\right|^{2}}{\overline{S}_{xx}(f)\,\overline{S}_{yy}(f)} 14. The system as in claim 1 wherein the signal processing section is further operable to detect an anomaly by estimating the spectral density of the signals in the RF environment at the RRH from the received signals. 15. The system as in claim 1 further comprising a data storage section operable to store the received signal vectors, detected anomalies and the displayed visualizations. 16. The system as in claim 1 further comprising: an RRH, RF conversion and filter section for down converting over the air RF signals into digital signals; an RRH signal capture section for capturing the down converted digital signals and preprocessing the signals; and a second transceiving section at the RRH for transmitting the preprocessed signals from the RRH over the network to the first receiving section. 17. The system as in claim 16 wherein the first receiving section, signal processing section and the interface are part of a network element management system. 18. A method for analyzing the operation of a radio frequency (RF) remote radio head comprising: receiving signals from a tower mounted, remote radio head (RRH), the signals comprising information related to signals from an RF environment at the RRH; processing the received signals in the time and frequency domains to identify one or more anomalies due to internal or external interfering signals from the RF environment at the RRH; and displaying a visualization of the one or more anomalies. 19. The method as in claim 18 further comprising detecting an anomaly by estimating the spectral content of the signals in the RF environment at the RRH based on the received signals. 20. The method as in claim 19 further comprising detecting an anomaly by identifying one or more acceptable or interfering RF signals in the RF environment at the RRH from the received signals based on a time and frequency analysis.
2,600
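Claims 4, 5, and 13 above correspond to standard spectral estimators: the periodogram, a windowed power spectral density with reduced variance, and magnitude-squared coherence. A minimal sketch using SciPy's equivalents follows; the sample rate, the stand-in signals, and the Welch/Hann parameter choices are illustrative assumptions, not values from the application.

import numpy as np
from scipy.signal import periodogram, welch, coherence

fs = 30.72e6                      # assumed capture sample rate at the RRH
rng = np.random.default_rng(0)
x = rng.standard_normal(1 << 16)  # stand-in for captured RRH samples

# Claim 4: P_xx(w) = (1/N) |sum_n x(n) e^{-jwn}|^2 is the classical
# periodogram.
f, Pxx = periodogram(x, fs=fs)

# Claim 5: windowing and averaging reduce the periodogram's variance;
# Welch's method with a Hann window is one common realization.
f_w, Pxx_w = welch(x, fs=fs, window="hann", nperseg=4096)

# Claim 13: C_xy(f) = |S_xy(f)|^2 / (S_xx(f) S_yy(f)) is the
# magnitude-squared coherence between two captured channels.
y = np.roll(x, 5) + 0.1 * rng.standard_normal(x.size)
f_c, Cxy = coherence(x, y, fs=fs, nperseg=4096)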
10,808
10,808
15,887,869
2,668
Techniques of automatic image classification and modification in computing systems are disclosed herein. In one embodiment, a method includes scanning an inbox on email servers for emails containing image files. Upon detecting that an email in the inbox contains an image file, the method includes retrieving an identification photo of a user from a data store. The method also includes determining, via facial recognition, whether the image file in the email contains at least a partial image of the user based on the retrieved identification photo. In response to determining that the image file in the email contains at least a partial image of the user, a metadata value is inserted into the image file indicating that the image file contains at least a partial image of the user before the image file is stored in the inbox on the email servers.
1. A method for automatic image classification in a computing system having one or more email servers interconnected to client devices by a computer network, the method comprising: scanning an inbox on the one or more email servers for emails containing one or more image files; and upon detecting that an email in the inbox contains an image file, retrieving, via the computer network, an identification photo of a user from a data store containing entries of user identifications and corresponding identification photos; determining, via facial recognition, whether the image file in the email contains at least a partial image of the user based on the retrieved identification photo of the user; and in response to determining that the image file in the email contains at least a partial image of the user, inserting a metadata value into the image file indicating that the image file contains at least a partial image of the user and storing the image file along with the inserted metadata value in the inbox on the one or more email servers. 2. The method of claim 1, further comprising: in response to determining that the image file in the email contains at least a partial image of the user, generating or modifying an image gallery containing image files individually having at least a partial image of the user; and transmitting, via the computer network, the generated or modified image gallery to another user having access to the inbox on the one or more email servers. 3. The method of claim 1, further comprising: in response to determining that the image file in the email contains at least a partial image of the user, transmitting, via the computer network, a copy of the image file along with the inserted metadata value to a client device corresponding to another user having access to the inbox on the one or more email servers. 4. The method of claim 1, further comprising: in response to determining that the image file in the email contains at least a partial image of the user, updating an image search index to indicate that the stored image file contains at least a partial image of the user. 5. The method of claim 1, further comprising: upon detecting that an email in the inbox contains an image file, retrieving, via the computer network, another image file known to contain an image of the user from the inbox on the one or more email servers; and wherein determining whether the image file in the email contains at least a partial image of the user includes determining, via facial recognition, whether the image file in the email contains at least a partial image of the user based on the image of the user in the retrieved another image file. 6. The method of claim 1 wherein determining, via facial recognition, whether the image file in the email contains at least a partial image of the user includes: extracting one or more landmarks from the retrieved identification photo of the user's face; and determining whether the extracted one or more landmarks are present in the image file. 7. The method of claim 1 wherein determining, via facial recognition, whether the image file in the email contains at least a partial image of the user includes: extracting one or more landmarks from the retrieved identification photo of the user's face; determining whether the extracted one or more landmarks are present in the image file; and indicating that the image file contains at least a partial image of the user in response to determining that the extracted one or more landmarks are present in the image file. 8.
The method of claim 1, further comprising in response to determining that the image file in the email contains at least a partial image of the user, repeating the retrieving and determining operations for additional identification photos of additional users. 9. The method of claim 1 wherein scanning the inbox includes: determining whether the email in the inbox contains an attachment; in response to determining that the email in the inbox contains an attachment, determining whether the attachment has a file extension indicating an image file; and in response to determining that the attachment has a file extension indicating an image file, indicating that the email contains an image file. 10. A computing device configured to be interconnected to one or more client devices by a computer network, the computing device comprising: a processor; and a memory operatively coupled to the processor, the memory containing instructions executable by the processor to cause the computing device to: determine whether one or more image files are included as attachments to emails in an inbox corresponding to a user's email account; and upon determining that one or more image files are included as attachments to the emails in the inbox, determine, via facial recognition, whether the one or more image files individually contain at least a partial image of a person based on one or more facial landmarks of the person derived from an identification photo of the user; and in response to determining that the one or more image files contain at least a partial image of the person, modify a metadata value of the one or more image files to indicate that the one or more image files contain at least a partial image of the person and store the one or more image files along with the modified metadata value in the inbox corresponding to the user's email account. 11. The computing device of claim 10 wherein the memory contains additional instructions executable by the processor to cause the computing device to: in response to determining that the image file in the email contains at least a partial image of the user, generate or modify an image gallery containing image files individually having at least a partial image of the user; and transmit, via the computer network, the generated or modified image gallery to another user having access to the inbox on the one or more email servers. 12. The computing device of claim 10 wherein the memory contains additional instructions executable by the processor to cause the computing device to: in response to determining that the image file in the email contains at least a partial image of the user, transmit, via the computer network, a copy of the image file along with the inserted metadata value to a client device corresponding to another user having access to the inbox on the one or more email servers. 13. The computing device of claim 10 wherein the memory contains additional instructions executable by the processor to cause the computing device to: in response to determining that the image file in the email contains at least a partial image of the user, update an image search index to indicate that the stored image file contains at least a partial image of the user. 14.
The computing device of claim 10 wherein the memory contains additional instructions executable by the processor to cause the computing device to: upon detecting that an email in the inbox contains an image file, retrieve, via the computer network, another image file known to contain an image of the user from the inbox on the one or more email servers; and wherein to determine whether the image file in the email contains at least a partial image of the user includes to determine, via facial recognition, whether the image file in the email contains at least a partial image of the user based on the image of the user in the retrieved another image file. 15. The computing device of claim 10 wherein the memory contains additional instructions executable by the processor to cause the computing device to repeat the retrieving and determining operations for additional identification photos of additional users in response to determining that the image file in the email does not contain at least a partial image of the user. 16. A method for automatic image classification in a computing system having one or more email servers interconnected to client devices by a computer network, the method comprising: determining whether an image file is included as an attachment to an email received in an inbox corresponding to a user's email account; and in response to determining that an image file is included as an attachment to the email received in the inbox, applying facial recognition to determine whether the image file contains at least a partial image of a person based on one or more features of the person derived from a profile or identification photo of the person or another image file previously identified as containing an image of the person; and in response to determining that the image file contains at least a partial image of the person, modifying a metadata value of the image file to indicate that the image file contains at least a partial image of the person; and storing the image file along with the modified metadata value in the inbox corresponding to the user's email account. 17. The method of claim 16, further comprising: in response to determining that the image file in the email contains at least a partial image of the user, generating or modifying an image gallery containing image files individually having at least a partial image of the user; and transmitting, via the computer network, the generated or modified image gallery to another user having access to the inbox on the one or more email servers. 18. The method of claim 16, further comprising: in response to determining that the image file in the email contains at least a partial image of the user, transmitting, via the computer network, a copy of the image file along with the inserted metadata value to a client device corresponding to another user having access to the inbox on the one or more email servers. 19. The method of claim 16, further comprising: in response to determining that the image file in the email contains at least a partial image of the user, updating an image search index to indicate that the stored image file contains at least a partial image of the user. 20.
The method of claim 16, further comprising: upon detecting that an email in the inbox contains an image file, retrieving, via the computer network, another image file known to contain an image of the user from the inbox on the one or more email servers; and wherein determining whether the image file in the email contains at least a partial image of the user includes determining, via facial recognition, whether the image file in the email contains at least a partial image of the user based on the image of the user in the retrieved another image file.
Techniques of automatic image classification and modification in computing systems are disclosed herein. In one embodiment, a method includes scanning an inbox on email servers for emails containing image files. Upon detecting that an email in the inbox contains an image file, the method includes retrieving an identification photo of a user from a data store. The method also includes determining, via facial recognition, whether the image file in the email contains at least a partial image of the user based on the retrieved identification photo. In response to determining that the image file in the email contains at least a partial image of the user, a metadata value is inserted into the image file indicating that the image file contains at least a partial image of the user before the image file is stored in the inbox on the email servers. 1. A method for automatic image classification in a computing system having one or more email servers interconnected to client devices by a computer network, the method comprising: scanning an inbox on the one or more email servers for emails containing one or more image files; and upon detecting that an email in the inbox contains an image file, retrieving, via the computer network, an identification photo of a user from a data store containing entries of user identifications and corresponding identification photos; determining, via facial recognition, whether the image file in the email contains at least a partial image of the user based on the retrieved identification photo of the user; and in response to determining that the image file in the email contains at least a partial image of the user, inserting a metadata value into the image file indicating that the image file contains at least a partial image of the user and storing the image file along with the inserted metadata value in the inbox on the one or more email servers. 2. The method of claim 1, further comprising: in response to determining that the image file in the email contains at least a partial image of the user, generating or modifying an image gallery containing image files individually having at least a partial image of the user; and transmitting, via the computer network, the generated or modified image gallery to another user having access to the inbox on the one or more email servers. 3. The method of claim 1, further comprising: in response to determining that the image file in the email contains at least a partial image of the user, transmitting, via the computer network, a copy of the image file along with the inserted metadata value to a client device corresponding to another user having access to the inbox on the one or more email servers. 4. The method of claim 1, further comprising: in response to determining that the image file in the email contains at least a partial image of the user, updating an image search index to indicate that the stored image file contains at least a partial image of the user. 5. The method of claim 1, further comprising: upon detecting that an email in the inbox contains an image file, retrieving, via the computer network, another image file known to contain an image of the user from the inbox on the one or more email servers; and wherein determining whether the image file in the email contains at least a partial image of the user includes determining, via facial recognition, whether the image file in the email contains at least a partial image of the user based on the image of the user in the retrieved another image file. 6.
The method of claim 1 wherein determining, via facial recognition, whether the image file in the email contains at least a partial image of the user includes: extracting one or more landmarks from the retrieved identification photo of the user's face; and determining whether the extracted one or more landmarks are present in the image file. 7. The method of claim 1 wherein determining, via facial recognition, whether the image file in the email contains at least a partial image of the user includes: extracting one or more landmarks from the retrieved identification photo of the user's face; determining whether the extracted one or more landmarks are present in the image file; and indicating that the image file contains at least a partial image of the user in response to determining that the extracted one or more landmarks are present in the image file. 8. The method of claim 1, further comprising in response to determining that the image file in the email contains at least a partial image of the user, repeating the retrieving and determining operations for additional identification photos of additional users. 9. The method of claim 1 wherein scanning the inbox includes: determining whether the email in the inbox contains an attachment; in response to determining that the email in the inbox contains an attachment, determining whether the attachment has a file extension indicating an image file; and in response to determining that the attachment has a file extension indicating an image file, indicating that the email contains an image file. 10. A computing device configured to be interconnected to one or more client devices by a computer network, the computing device comprising: a processor; and a memory operatively coupled to the processor, the memory containing instructions executable by the processor to cause the computing device to: determine whether one or more image files are included as attachments to emails in an inbox corresponding to a user's email account; and upon determining that one or more image files are included as attachments to the emails in the inbox, determine, via facial recognition, whether the one or more image files individually contain at least a partial image of a person based on one or more facial landmarks of the person derived from an identification photo of the user; and in response to determining that the one or more image files contain at least a partial image of the person, modify a metadata value of the one or more image files to indicate that the one or more image files contain at least a partial image of the person and store the one or more image files along with the modified metadata value in the inbox corresponding to the user's email account. 11. The computing device of claim 10 wherein the memory contains additional instructions executable by the processor to cause the computing device to: in response to determining that the image file in the email contains at least a partial image of the user, generate or modify an image gallery containing image files individually having at least a partial image of the user; and transmit, via the computer network, the generated or modified image gallery to another user having access to the inbox on the one or more email servers. 12.
The computing device of claim 10 wherein the memory contains additional instructions executable by the processor to cause the computing device to: in response to determining that the image file in the email contains at least a partial image of the user, transmit, via the computer network, a copy of the image file along with the inserted metadata value to a client device corresponding to another user having access to the inbox on the one or more email servers. 13. The computing device of claim 10 wherein the memory contains additional instructions executable by the processor to cause the computing device to: in response to determining that the image file in the email contains at least a partial image of the user, update an image search index to indicate that the stored image file contains at least a partial image of the user. 14. The computing device of claim 10 wherein the memory contains additional instructions executable by the processor to cause the computing device to: upon detecting that an email in the inbox contains an image file, retrieve, via the computer network, another image file known to contain an image of the user from the inbox on the one or more email servers; and wherein to determine whether the image file in the email contains at least a partial image of the user includes to determine, via facial recognition, whether the image file in the email contains at least a partial image of the user based on the image of the user in the retrieved another image file. 15. The computing device of claim 10 wherein the memory contains additional instructions executable by the processor to cause the computing device to repeat the retrieving and determining operations for additional identification photos of additional users in response to determining that the image file in the email does not contain at least a partial image of the user. 16. A method for automatic image classification in a computing system having one or more email servers interconnected to client devices by a computer network, the method comprising: determining whether an image file is included as an attachment to an email received in an inbox corresponding to a user's email account; and in response to determining that an image file is included as an attachment to the email received in the inbox, applying facial recognition to determine whether the image file contains at least a partial image of a person based on one or more features of the person derived from a profile or identification photo of the person or another image file previously identified as containing an image of the person; and in response to determining that the image file contains at least a partial image of the person, modifying a metadata value of the image file to indicate that the image file contains at least a partial image of the person; and storing the image file along with the modified metadata value in the inbox corresponding to the user's email account. 17. The method of claim 16, further comprising: in response to determining that the image file in the email contains at least a partial image of the user, generating or modifying an image gallery containing image files individually having at least a partial image of the user; and transmitting, via the computer network, the generated or modified image gallery to another user having access to the inbox on the one or more email servers. 18.
The method of claim 16, further comprising: in response to determining that the image file in the email contains at least a partial image of the user, transmitting, via the computer network, a copy of the image file along with the inserted metadata value to a client device corresponding to another user having access to the inbox on the one or more email servers. 19. The method of claim 16, further comprising: in response to determining that the image file in the email contains at least a partial image of the user, updating an image search index to indicate that the stored image file contains at least a partial image of the user. 20. The method of claim 16, further comprising: upon detecting that an email in the inbox contains an image file, retrieving, via the computer network, another image file known to contain an image of the user from the inbox on the one or more email servers; and wherein determining whether the image file in the email contains at least a partial image of the user includes determining, via facial recognition, whether the image file in the email contains at least a partial image of the user based on the image of the user in the retrieved another image file.
2,600
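The attachment scan of claim 9 and the landmark-based recognition of claims 1, 6, and 7 map onto a short sketch. The open-source face_recognition package stands in here for the unspecified facial recognition component, and the paths, mailbox handling, and tolerance value are illustrative assumptions.

import os
import face_recognition

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def is_image_attachment(filename):
    # Claim 9: treat an attachment as an image based on its file extension.
    return os.path.splitext(filename)[1].lower() in IMAGE_EXTS

def contains_user(attachment_path, id_photo_path, tolerance=0.6):
    # Claims 6 and 7: derive facial features from the identification
    # photo, then test whether they are present in the attached image.
    id_image = face_recognition.load_image_file(id_photo_path)
    id_encodings = face_recognition.face_encodings(id_image)
    if not id_encodings:
        return False
    attachment = face_recognition.load_image_file(attachment_path)
    for encoding in face_recognition.face_encodings(attachment):
        if face_recognition.compare_faces([id_encodings[0]], encoding, tolerance)[0]:
            return True
    return False

A matching attachment would then receive the claimed metadata value (for example, a field in the message store's image record) before being written back to the inbox.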
10,809
10,809
16,413,642
2,648
A high-frequency module includes a circuit board including wiring patterns, a resin that covers an active element mounted on the circuit board and a side of the circuit board and seals the active element, and connection conductors penetrating the resin from a surface of the resin and provided on a top surface of the active element. The active element includes a first connection electrode on a surface facing the circuit board, and a second connection electrode on a top surface opposite to the surface facing the circuit board. The first connection electrode is connected to a wiring pattern on the circuit board, and the second connection electrode is connected to the connection conductor and an outer electrode and is not connected to the wiring pattern.
1. An active element, comprising: an RF wire disposed on a first surface of the active element, and through which a high-frequency signal is communicated; and a control wire disposed on a second surface of the active element opposite to the first surface, and through which a control signal is communicated. 2. A high-frequency module, comprising: a circuit board including a wiring pattern; the active element according to claim 1, mounted on the circuit board, and including a first connection electrode on one of the first and second surfaces of the active element facing the circuit board and a second connection electrode on another one of the first and second surfaces of the active element opposite to the one of the first and second surfaces facing the circuit board; a resin sealing the active element and a side of the circuit board on a surface of the circuit board at which the active element is mounted; an outer electrode provided on a surface of the resin opposite to a surface of the resin at which the circuit board is disposed, and including a first outer connection terminal and a second outer connection terminal; a first connection conductor penetrating the resin from the surface of the resin opposite to the surface at which the circuit board is disposed, and connecting the second connection electrode and the first outer connection terminal; and a second connection conductor connecting the wiring pattern and the second outer connection terminal; wherein a high-frequency signal is communicated through the first connection electrode; a control signal is communicated through the second connection electrode; the first connection electrode is connected to the wiring pattern; and the second connection electrode is connected to the first connection conductor without the wiring pattern interposed therebetween. 3. The high-frequency module according to claim 2, wherein electronic components are mounted on both surfaces of the circuit board. 4. The high-frequency module according to claim 2, further comprising a shield disposed above a surface of the circuit board opposite to the surface on which the active element is mounted. 5. The high-frequency module according to claim 4, wherein the shield is provided on a side surface of the circuit board and on a side surface of the resin. 6. The high-frequency module according to claim 4, further comprising another connection conductor penetrating the resin from the circuit board and connected to the shield. 7. The high-frequency module according to claim 2, wherein the second outer connection terminal penetrates the resin from a surface of the resin and is connected to the first connection electrode with the wiring pattern interposed therebetween; and a third connection conductor is connected to a ground potential between the first connection conductor and the second connection conductor in plan view. 8. The high-frequency module according to claim 2, wherein the active element is a switch IC. 9. The high-frequency module according to claim 8, wherein the switch IC includes a switch and a control circuit. 10. The high-frequency module according to claim 9, wherein the switch is a semiconductor switch. 11. A communication device, comprising: the high-frequency module according to claim 2. 12. The communication device according to claim 11, wherein electronic components are mounted on both surfaces of the circuit board. 13. 
The communication device according to claim 11, further comprising a shield disposed above a surface of the circuit board opposite to the surface on which the active element is mounted. 14. The communication device according to claim 13, wherein the shield is provided on a side surface of the circuit board and on a side surface of the resin. 15. The communication device according to claim 13, further comprising another connection conductor penetrating the resin from the circuit board and connected to the shield. 16. The communication device according to claim 11, wherein the second outer connection terminal penetrates the resin from a surface of the resin and is connected to the first connection electrode with the wiring pattern interposed therebetween; and a third connection conductor is connected to a ground potential between the first connection conductor and the second connection conductor in plan view. 17. The communication device according to claim 11, wherein the active element is a switch IC. 18. The communication device according to claim 17, wherein the switch IC includes a switch and a control circuit. 19. The communication device according to claim 18, wherein the switch is a semiconductor switch. 20. The communication device according to claim 11, further comprising: a radio frequency module; a radio frequency signal processing circuit; and a baseband signal processing circuit.
A high-frequency module includes a circuit board including wiring patterns, a resin that covers an active element mounted on the circuit board and a side of the circuit board and seals the active element, and connection conductors penetrating the resin from a surface of the resin and provided on a top surface of the active element. The active element includes a first connection electrode on a surface facing the circuit board, and a second connection electrode on a top surface opposite to the surface facing the circuit board. The first connection electrode is connected to a wiring pattern on the circuit board, and the second connection electrode is connected to the connection conductor and an outer electrode and is not connected to the wiring pattern. 1. An active element, comprising: an RF wire disposed on a first surface of the active element, and through which a high-frequency signal is communicated; and a control wire disposed on a second surface of the active element opposite to the first surface, and through which a control signal is communicated. 2. A high-frequency module, comprising: a circuit board including a wiring pattern; the active element according to claim 1, mounted on the circuit board, and including a first connection electrode on one of the first and second surfaces of the active element facing the circuit board and a second connection electrode on another one of the first and second surfaces of the active element opposite to the one of the first and second surfaces facing the circuit board; a resin sealing the active element and a side of the circuit board on a surface of the circuit board at which the active element is mounted; an outer electrode provided on a surface of the resin opposite to a surface of the resin at which the circuit board is disposed, and including a first outer connection terminal and a second outer connection terminal; a first connection conductor penetrating the resin from the surface of the resin opposite to the surface at which the circuit board is disposed, and connecting the second connection electrode and the first outer connection terminal; and a second connection conductor connecting the wiring pattern and the second outer connection terminal; wherein a high-frequency signal is communicated through the first connection electrode; a control signal is communicated through the second connection electrode; the first connection electrode is connected to the wiring pattern; and the second connection electrode is connected to the first connection conductor without the wiring pattern interposed therebetween. 3. The high-frequency module according to claim 2, wherein electronic components are mounted on both surfaces of the circuit board. 4. The high-frequency module according to claim 2, further comprising a shield disposed above a surface of the circuit board opposite to the surface on which the active element is mounted. 5. The high-frequency module according to claim 4, wherein the shield is provided on a side surface of the circuit board and on a side surface of the resin. 6. The high-frequency module according to claim 4, further comprising another connection conductor penetrating the resin from the circuit board and connected to the shield. 7.
The high-frequency module according to claim 2, wherein the second outer connection terminal penetrates the resin from a surface of the resin and is connected to the first connection electrode with the wiring pattern interposed therebetween; and a third connection conductor is connected to a ground potential between the first connection conductor and the second connection conductor in plan view. 8. The high-frequency module according to claim 2, wherein the active element is a switch IC. 9. The high-frequency module according to claim 8, wherein the switch IC includes a switch and a control circuit. 10. The high-frequency module according to claim 9, wherein the switch is a semiconductor switch. 11. A communication device, comprising: the high-frequency module according to claim 2. 12. The communication device according to claim 11, wherein electronic components are mounted on both surfaces of the circuit board. 13. The communication device according to claim 11, further comprising a shield disposed above a surface of the circuit board opposite to the surface on which the active element is mounted. 14. The communication device according to claim 13, wherein the shield is provided on a side surface of the circuit board and on a side surface of the resin. 15. The communication device according to claim 13, further comprising another connection conductor penetrating the resin from the circuit board and connected to the shield. 16. The communication device according to claim 11, wherein the second outer connection terminal penetrates the resin from a surface of the resin and is connected to the first connection electrode with the wiring pattern interposed therebetween; and a third connection conductor is connected to a ground potential between the first connection conductor and the second connection conductor in plan view. 17. The communication device according to claim 11, wherein the active element is a switch IC. 18. The communication device according to claim 17, wherein the switch IC includes a switch and a control circuit. 19. The communication device according to claim 18, wherein the switch is a semiconductor switch. 20. The communication device according to claim 11, further comprising: a radio frequency module; a radio frequency signal processing circuit; and a baseband signal processing circuit.
2,600
10,810
10,810
15,342,095
2,649
A method of conserving power of a mobile device is provided. The method includes monitoring the battery capacity of a battery of the mobile device, switching off a radio of the mobile device when the battery capacity is below a predetermined threshold, and disabling at least one application while allowing an application associated with a cellular call to remain enabled.
1. (canceled) 2. A method of operating a mobile device comprising: connecting a mobile device to a WIFI network and a cellular network; initiating a first connection over the WIFI network and a second connection over the cellular network, wherein initiating the first connection is in response to a request from an application, wherein initiating the second connection is in response to the request from the application, wherein the initiation of the second connection is independent of a state of the first connection; displaying an indication of availability of the WIFI network and the cellular network; accessing data through the first connection in response to the request from the application; detecting a condition indicative of time responsiveness of the first connection; and accessing data in response to the request from the application through the second connection based on the detected condition. 3. A method of provisioning communication services in a network system, comprising: managing a state of each of a plurality of radio access networks; maintaining an indication of available services of each of the plurality of radio access networks; connecting to a primary radio access network of the plurality of radio access networks in an active state with a mobile device; registering the mobile device with a secondary radio access network of the plurality of radio access networks in a non-active state, wherein the network system is adapted to perform handover from the primary radio access network to the secondary radio access network based on registration of the mobile device with the secondary radio access network, wherein the primary radio access network comprises a wireless local area network and the secondary radio access network comprises a cellular network; and activating a packet data protocol with the secondary radio access network. 4. A mobile device comprising: a memory and a processor; and a network interface operable to: connect to a WIFI network and a cellular network; initiate a first connection over the WIFI network and a second connection over the cellular network, wherein the first connection is initiated in response to a request from an application, wherein the second connection is initiated in response to the request from the application, wherein the second connection is initiated independent of a state of the first connection; display an indication of availability of the WIFI network and the cellular network; access data through the first connection in response to the request from the application; detect a condition indicative of time responsiveness of the first connection; and access data in response to the request from the application through the second connection based on the detected condition. 5.
A mobile device comprising: a network interface operable to: connect to a plurality of access networks; and a processor and memory associated with the network interface and operable to: manage a state of each of a plurality of radio access networks; maintain an indication of available services of each of the plurality of radio access networks; connect to a primary radio access network of the plurality of radio access networks in an active state with a mobile device; register the mobile device with a secondary radio access network of the plurality of radio access networks in a non-active state, wherein handover is performed from the primary radio access network to the secondary radio access network based on registration of the mobile device with the secondary radio access network, wherein the primary radio access network comprises a wireless local area network and the secondary radio access network comprises a cellular network; and activate a packet data protocol with the secondary radio access network. 6. The method of claim 2, wherein the first connection and the second connection are TCP connections. 7. The method of claim 2, wherein the time responsiveness is measured on an application-by-application basis for other applications executing on the mobile device. 8. The method of claim 2, wherein a time responsiveness of the first connection is measured for each request from the application. 9. The mobile device of claim 4, wherein the first connection and the second connection are TCP connections. 10. The mobile device of claim 4, wherein the time responsiveness is measured on an application-by-application basis for other applications executing on the mobile device. 11. The mobile device of claim 4, wherein a time responsiveness of the first connection is measured for each request from the application.
A method of conserving power of a mobile device is provided. The method includes monitoring the battery capacity of a battery of the mobile device, switching off a radio of the mobile device when the battery capacity is below a predetermined threshold, and disabling at least one application while allowing an application associated with a cellular call to remain enabled. 1. (canceled) 2. A method of operating a mobile device comprising: connecting a mobile device to a WIFI network and a cellular network; initiating a first connection over the WIFI network and a second connection over the cellular network, wherein initiating the first connection is in response to a request from an application, wherein initiating the second connection is in response to the request from the application, wherein the initiation of the second connection is independent of a state of the first connection; displaying an indication of availability of the WIFI network and the cellular network; accessing data through the first connection in response to the request from the application; detecting a condition indicative of time responsiveness of the first connection; and accessing data in response to the request from the application through the second connection based on the detected condition. 3. A method of provisioning communication services in a network system, comprising: managing a state of each of a plurality of radio access networks; maintaining an indication of available services of each of the plurality of radio access networks; connecting to a primary radio access network of the plurality of radio access networks in an active state with a mobile device; registering the mobile device with a secondary radio access network of the plurality of radio access networks in a non-active state, wherein the network system is adapted to perform handover from the primary radio access network to the secondary radio access network based on registration of the mobile device with the secondary radio access network, wherein the primary radio access network comprises a wireless local area network and the secondary radio access network comprises a cellular network; and activating a packet data protocol with the secondary radio access network. 4. A mobile device comprising: a memory and a processor; and a network interface operable to: connect to a WIFI network and a cellular network; initiate a first connection over the WIFI network and a second connection over the cellular network, wherein the first connection is initiated in response to a request from an application, wherein the second connection is initiated in response to the request from the application, wherein the second connection is initiated independent of a state of the first connection; display an indication of availability of the WIFI network and the cellular network; access data through the first connection in response to the request from the application; detect a condition indicative of time responsiveness of the first connection; and access data in response to the request from the application through the second connection based on the detected condition. 5.
A mobile device comprising: a network interface operable to: connect to a plurality of access networks; and a processor and memory associated with the network interface and operable to: manage a state of each of a plurality of radio access networks; maintain an indication of available services of each of the plurality of radio access networks; connect to a primary radio access network of the plurality of radio access networks in an active state with a mobile device; register the mobile device with a secondary radio access network of the plurality of radio access networks in a non-active state, wherein handover is performed from the primary radio access network to the secondary radio access network based on registration of the mobile device with the secondary radio access network, wherein the primary radio access network comprises a wireless local area network and the secondary radio access network comprises a cellular network; and activate a packet data protocol with the secondary radio access network. 6. The method of claim 2, wherein the first connection and the second connection are TCP connections. 7. The method of claim 2, wherein the time responsiveness is measured on an application-by-application basis for other applications executing on the mobile device. 8. The method of claim 2, wherein a time responsiveness of the first connection is measured for each request from the application. 9. The mobile device of claim 4, wherein the first connection and the second connection are TCP connections. 10. The mobile device of claim 4, wherein the time responsiveness is measured on an application-by-application basis for other applications executing on the mobile device. 11. The mobile device of claim 4, wherein a time responsiveness of the first connection is measured for each request from the application.
2,600
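Claims 2, 4, and 8 above describe serving an application's request over a Wi-Fi connection while an independently initiated cellular connection stands by, then using the cellular connection when a time-responsiveness condition is detected. A rough per-request sketch follows; the connection objects, their fetch method, and the two-second threshold are hypothetical stand-ins for OS-level per-interface networking.

import concurrent.futures

RESPONSIVENESS_TIMEOUT_S = 2.0  # assumed per-request threshold (claim 8)

def fetch_with_fallback(request, wifi_conn, cell_conn):
    # The cellular connection was initiated independently of the Wi-Fi
    # connection's state (claim 2), so fallback needs no new setup.
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(wifi_conn.fetch, request)
    try:
        # Time responsiveness is measured for each request (claim 8).
        return future.result(timeout=RESPONSIVENESS_TIMEOUT_S)
    except concurrent.futures.TimeoutError:
        # Condition detected: serve the same request over cellular.
        return cell_conn.fetch(request)
    finally:
        # Don't block on a stalled Wi-Fi fetch (requires Python 3.9+).
        pool.shutdown(wait=False, cancel_futures=True)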
10,811
10,811
16,166,395
2,612
Aspects of the present disclosure provide a method of navigating a Virtual Reality (VR) environment. The method includes providing the VR environment, receiving, from a user, inputs indicative of a user-drawn path in the VR environment, the user-drawn path originating at a position of the user and terminating at a terminal point in the VR environment, generating a plurality of waypoints along the user-drawn path, moving the user to a first waypoint of the plurality of waypoints, pausing the user at the first waypoint, and moving, responsive to a criterion being met, the user to a second waypoint of the plurality of waypoints.
1. A method of navigating a Virtual Reality (VR) environment, the method comprising: providing the VR environment; receiving, from a user, inputs indicative of a user-drawn path in the VR environment, the user-drawn path originating at a position of the user and terminating at a terminal point in the VR environment; generating a plurality of waypoints along the user-drawn path; moving the user to a first waypoint of the plurality of waypoints; pausing the user at the first waypoint; and moving, responsive to a criterion being met, the user to a second waypoint of the plurality of waypoints. 2. The method of claim 1, further comprising sequentially moving the user to a first subsequent waypoint of the plurality of waypoints, pausing the user at the first subsequent waypoint, and moving, responsive to the criterion being met, the user to a second subsequent waypoint until the user is moved to the terminal point. 3. The method of claim 2, wherein the criterion being met includes at least one of: determining that an amount of time has elapsed; and receiving a user input to move to a next waypoint. 4. The method of claim 3, further comprising receiving, before the criterion has been met, input from a user indicative of an interaction with an entity in the VR environment. 5. The method of claim 1, wherein receiving the inputs from the user indicative of the user-drawn path includes receiving, from a user input device, a location input indicative of a position on a surface of the VR environment, and wherein generating the plurality of waypoints includes: determining that the position on the surface is above a threshold distance from a most-recently generated waypoint; and generating an additional waypoint at the position on the surface. 6. The method of claim 5, wherein receiving, from the user input device, the location input includes receiving, from a handheld motion controller, the location input. 7. The method of claim 1, wherein moving the user includes dashing the user. 8. A system for navigating a Virtual Reality (VR) environment, the system comprising: a display configured to display a view of the VR environment; a user input device; and a controller coupled to the display and the user input device, the controller being configured to: generate the VR environment; receive, from the user input device, inputs indicative of a user-drawn path in the VR environment, the user-drawn path originating at a position of the user and terminating at a terminal point; generate a plurality of waypoints along the user-drawn path; control the display to move the view of the VR environment to a first waypoint of the plurality of waypoints; control the display to pause the view of the VR environment at the first waypoint; and control the display, responsive to determining that a criterion has been met, to move the view of the VR environment to a second waypoint of the plurality of waypoints. 9. The system of claim 8, wherein the controller is further configured to control the display to sequentially move the view of the VR environment to a first subsequent waypoint of the plurality of waypoints, pause the user at the first subsequent waypoint, and move, responsive to determining that the criterion has been met, the user to a second subsequent waypoint until the view of the VR environment is moved to the terminal point. 10. 
The system of claim 9, wherein determining that the criterion has been met includes at least one of: determining that an amount of time has elapsed; and receiving, from the user input device, a user input to move to a next waypoint. 11. The system of claim 10, wherein the controller is further configured to receive, before the criterion has been met, an interaction input from a user indicative of an interaction with an entity in the VR environment. 12. The system of claim 8, wherein receiving the inputs includes receiving, from the user input device, a location input indicative of a position on a surface of the VR environment, and wherein generating the plurality of waypoints includes: determining that the position on the surface is above a threshold distance from a most-recently generated waypoint; and generating an additional waypoint at the position on the surface. 13. The system of claim 12, wherein the user input device includes at least one of a handheld motion controller and a motion capture device. 14. A non-transitory computer-readable medium storing sequences of computer-executable instructions for navigating a Virtual Reality (VR) environment, the sequences of computer-executable instructions including instructions that instruct at least one processor to: provide the VR environment; receive, from a user, inputs indicative of a user-drawn path in the VR environment, the user-drawn path originating at a position of the user and terminating at a terminal point; generate a plurality of waypoints along the user-drawn path; move the user to a first waypoint of the plurality of waypoints; pause the user at the first waypoint; and move, responsive to a criterion being met, the user to a second waypoint of the plurality of waypoints. 15. The non-transitory computer-readable medium of claim 14, wherein the sequences of computer-executable instructions are further configured to instruct at least one processor to sequentially move the user to a first subsequent waypoint of the plurality of waypoints, pause the user at the first subsequent waypoint, and move, responsive to the criterion being met, the user to a second subsequent waypoint until the user is moved to the terminal point. 16. The non-transitory computer-readable medium of claim 15, wherein the criterion being met includes at least one of: determining, by the processor, that an amount of time has elapsed; and receiving a user input to move to a next waypoint. 17. The non-transitory computer-readable medium of claim 16, wherein the at least one processor is further configured to receive, before the criterion has been met, an interaction input from a user indicative of an interaction with an entity in the VR environment. 18. The non-transitory computer-readable medium of claim 14, wherein receiving the inputs from the user indicative of the user-drawn path includes receiving, from a user input device, a location input indicative of a position on a surface of the VR environment, and wherein the instructions that instruct at least one processor to generate the plurality of waypoints include instructions that instruct the at least one processor to: determine that the position on the surface is above a threshold distance from a most-recently generated waypoint; and generate an additional waypoint at the position on the surface. 19. The non-transitory computer-readable medium of claim 18, wherein receiving, from the user input device, the location input includes receiving, from a handheld motion controller, the location input. 20. 
The non-transitory computer-readable medium of claim 14, wherein moving the user includes dashing the user.
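Claims 5, 12, and 18 each generate a new waypoint only when the pointed-to surface position lies beyond a threshold distance from the most-recently generated waypoint. A minimal sketch of that thinning filter, under the assumption that positions are plain coordinate tuples (all names hypothetical):

    import math

    def maybe_add_waypoint(waypoints, surface_position, threshold=1.5):
        """Append surface_position as a new waypoint only when it lies
        more than threshold (in world units) from the most-recently
        generated waypoint, thinning raw controller samples into a
        sparse set of waypoints along the drawn path."""
        if not waypoints or math.dist(waypoints[-1], surface_position) > threshold:
            waypoints.append(surface_position)
            return True
        return False

Calling this once per location input sampled from the handheld motion controller yields the plurality of waypoints the claims recite, with the threshold controlling how densely they sit along the path.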
2,600
10,812
10,812
16,297,663
2,651
A method provides binaural sound to a listener while the listener watches a movie so that sounds from the movie localize to the location of a character in the movie. Sound is convolved with head-related transfer functions (HRTFs) of the listener, and the convolved sound is provided to the listener, who wears a wearable electronic device.
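The convolution step in the abstract above has a compact expression in code. A minimal sketch, assuming pre-measured left and right head-related impulse responses (HRIRs, the time-domain form of HRTFs) and a mono source signal; binauralize and its arguments are illustrative names, not the application's API:

    import numpy as np

    def binauralize(mono, hrir_left, hrir_right):
        """Convolve a mono signal with a left/right HRIR pair to produce
        a two-channel signal that localizes at the position for which
        the HRIRs were measured."""
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        return np.stack([left, right], axis=-1)   # shape: (n_samples, 2)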
1.-20. (canceled) 21. A method that provides binaural sound to a listener watching a feature length movie in a virtual reality (VR) movie theater with a head mounted display (HMD), the method comprising: displaying, with the HMD worn by the listener, the VR movie theater having VR seats and a VR movie screen where the listener sits in one of the VR seats to watch the feature length movie on the VR movie screen; tracking, with the HMD worn by the listener, head orientations of the listener with respect to the VR movie screen while the listener watches the feature length movie on the VR movie screen; selecting, with the HMD worn by the listener, head-related transfer functions (HRTFs) based on the head orientations of the listener with respect to the VR movie screen while the listener watches the feature length movie on the VR movie screen; and processing, with one or more processors in the HMD worn by the listener, sound of the feature length movie with the HRTFs so the sound externally localizes to the listener as the binaural sound with a sound localization point (SLP) in empty space on the VR movie screen. 22. The method of claim 21 further comprising: changing, during the feature length movie, an audial point-of-view of the sound provided to the listener by changing SLPs from one character in the feature length movie to another character in the feature length movie. 23. The method of claim 21 further comprising: changing, during the feature length movie, an audial point-of-view of the sound provided to the listener by changing the sound from the binaural sound with the SLP in empty space on the VR movie screen to stereo sound with a SLP inside a head of the listener. 24. The method of claim 21 further comprising: providing, with the HMD and to the listener, different locations in the feature length movie that the listener can select to hear the sound as audial points-of-view with different SLPs that localize as the binaural sound. 25. The method of claim 21 further comprising: displaying, with the HMD, a visual indication on a character in the feature length movie to indicate to the listener that the character is currently selected as an audial viewpoint for the sound. 26. The method of claim 21 further comprising: receiving, from the listener, selection of a character in the feature length movie; and providing, with the HMD, the feature length movie to the listener with the binaural sound so the listener hears the sound as if the listener were the character selected in the feature length movie. 27. The method of claim 21 further comprising: providing, with the HMD, the sound to the listener from a point-of-view of a character in the feature length movie such that the listener hears the sound at relative locations where the character hears the sound. 28. 
A non-transitory computer-readable storage medium that stores instructions that one or more electronic devices execute as a method that provides three-dimensional (3D) sound to a listener watching a movie in a virtual reality (VR) movie theater while wearing a head mounted display (HMD), the method comprising: displaying, with the HMD worn by the listener, the listener in the VR movie theater seated at a VR seat to watch the movie on a VR movie screen; tracking head orientations of the listener with respect to the VR movie screen while the listener watches the movie on the VR movie screen; selecting different pairs of head-related transfer functions (HRTFs) as the head orientations of the listener change while the listener watches the movie on the VR movie screen; and processing sound of the movie with the different pairs of HRTFs so the sound continues to externally localize to the listener as the binaural sound with a sound localization point (SLP) in empty space on the VR movie screen while the head orientations of the listener change. 29. The non-transitory computer-readable storage medium of claim 28 further comprising: processing the sound of the movie to externally localize to the listener as the binaural sound that originates from a location of a character in the movie as the character moves across the VR movie screen, wherein the SLP follows movement of the character as the location of the character moves across the VR movie screen. 30. The non-transitory computer-readable storage medium of claim 28 further comprising: determining an angle of the VR seat with respect to the VR movie screen; and selecting, based on the angle, the different pairs of HRTFs to process the sound of the movie. 31. The non-transitory computer-readable storage medium of claim 28 further comprising: enabling the listener to be immersed in a space of the movie by processing the sound from the movie so SLPs of the sound occur at locations in empty space around a head of the listener as if the listener were at a location in a scene of the movie. 32. The non-transitory computer-readable storage medium of claim 28 further comprising: tracking the head orientations of the listener with respect to an image of a character that appears in the movie; and processing the sound of the movie so the SLP of the sound follows the image of the character as the image of the character moves across the VR movie screen. 33. The non-transitory computer-readable storage medium of claim 28 further comprising: processing the sound such that a first sound externally localizes as the binaural sound with a first SLP in empty space at a first location on the VR movie screen and such that a second sound externally localizes as the binaural sound with a second SLP in empty space at a second location behind a head of the listener. 34. The non-transitory computer-readable storage medium of claim 28 further comprising: processing the sound of the movie with the different pairs of HRTFs so the sound externally localizes behind a head of the listener while the listener views the VR movie screen located in front of the head of the listener. 35. 
A head mounted display (HMD) that provides binaural sound to a listener watching a movie in a virtual reality (VR) movie theater, the HMD comprising: a memory that stores instructions and a feature length movie; a display that displays the VR movie theater with the listener seated in a VR seat to watch the feature length movie on a VR movie screen; head tracking that tracks head orientations of the listener with respect to the VR movie screen; and a processor that executes the instructions to process sound of the feature length movie with different head-related transfer functions (HRTFs) so the sound continues to externally localize to the listener as the binaural sound with a sound localization point (SLP) in empty space on the VR movie screen while the head orientations of the listener change. 36. The HMD of claim 35 wherein the processor further executes the instructions to determine a distance from the VR seat where the listener is seated to a character displayed on the VR movie screen, and to adjust a loudness of a voice of the character based on the distance. 37. The HMD of claim 35 wherein the processor further executes the instructions to display a list of different characters in the feature length movie that are available as audial points-of-view such that when the listener selects one of the different characters then the listener hears the sound from a point-of-view of the one of the different characters that the listener selected. 38. The HMD of claim 35 wherein the processor further executes the instructions to process the sound of the feature length movie so the listener hears the sound from a point-of-view of a character in the feature length movie as if the listener were at locations of the character in the feature length movie as the character moves about in scenes in the feature length movie. 39. The HMD of claim 35 wherein the processor further executes the instructions to determine an angle from the VR seat where the listener is seated to the VR movie screen and to select, based on the angle and the head orientations, the different HRTFs so the sound externally localizes as the binaural sound in empty space at the VR movie screen. 40. The HMD of claim 35 wherein the processor further executes the instructions to distinguish a voice of a narrator in the feature length movie from a voice of a character in the feature length movie by providing the voice of the narrator in stereo sound that internally localizes inside a head of the listener and the voice of the character as the binaural sound that externally localizes outside the head of the listener.
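The independent claims above select different HRTF pairs as the tracked head orientation changes so that the sound localization point stays fixed on the VR movie screen. A minimal sketch of that selection, assuming a table of HRIR pairs keyed by measurement azimuth in degrees; every name here is hypothetical:

    def select_hrtf_pair(hrtf_table, screen_azimuth_deg, head_yaw_deg):
        """Choose the HRIR pair measured nearest to the screen's
        direction relative to the listener's current head yaw, so the
        SLP stays anchored on the VR movie screen while the head turns.
        hrtf_table maps measurement azimuths in degrees to
        (hrir_left, hrir_right) pairs."""
        relative = (screen_azimuth_deg - head_yaw_deg) % 360.0

        def angular_error(azimuth):
            d = abs(azimuth - relative) % 360.0
            return min(d, 360.0 - d)   # shortest angular distance

        return hrtf_table[min(hrtf_table, key=angular_error)]

Re-running this lookup each time the head tracker reports a new yaw, then feeding the selected pair into the convolution step sketched earlier, gives the continuous re-selection the claims describe.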
2,600
10,813
10,813
16,163,925
2,643
Disclosed are a location information determining method and system for providing a variety of services based on a location. The location information determining method includes receiving cell information, and determining, from a location information database that stores location information matching a plurality of pieces of cell information, respectively, the location information that matches the received cell information as the location information of a mobile terminal.
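The lookup described in the abstract amounts to a keyed query against the location information database. A minimal sketch with a plain dictionary standing in for that database, including the previous-cell fallback recited in claim 10 below; all names are hypothetical:

    def locate(location_db, cell_info, previous_cell_info=None):
        """Return the stored location matching the received cell
        information; when no entry exists for the current cell
        information, fall back to the location matched by the previous
        cell information."""
        if cell_info in location_db:
            return location_db[cell_info]
        if previous_cell_info is not None:
            return location_db.get(previous_cell_info)
        return None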
1. A method of determining location information based on cell information, the method comprising: receiving, using at least one processor, cell information related to a base station servicing at least one mobile terminal; determining, using the at least one processor, approximate location information based on the cell information using a location information database configured to store location information associated with a plurality of cell information; and transmitting, using the at least one processor, a location-based service to the at least one mobile terminal based on the determined approximate location information, wherein the determining the approximate location information based on the cell information includes, determining at least one neighboring base station adjacent to the base station servicing the at least one mobile terminal based on the cell information, estimating a location of the base station servicing the at least one mobile terminal based on cell identifier information of the at least one neighboring base station, calculating a centroid value of a cell covered by the base station servicing the at least one mobile terminal based on a cell shape corresponding to the estimated location of the base station, and determining location information corresponding to the centroid value of the cell as location information of the at least one mobile terminal. 2. The method of claim 1, wherein the approximate location information includes location coordinates corresponding to a centroid value of the base station servicing the at least one mobile terminal calculated based on a cell shape of the base station servicing the at least one mobile terminal. 3. The method of claim 1, wherein the determining the approximate location information based on the cell information comprises determining administrative district information of a region corresponding to the approximate location information. 4. The method of claim 1, further comprising: matching, using the at least one processor, location information corresponding to the centroid value of the cell and cell information of the base station servicing the at least one mobile terminal; and adding, using the at least one processor, the matching information to the location information database. 5. The method of claim 1, wherein the approximate location information includes current location coordinates or approximate location coordinates of the mobile terminal in a cell covered by the base station servicing the at least one mobile terminal. 6. The method of claim 1, wherein the location-based service is at least one of: a weather information providing service, a discount coupon providing service, an information providing service, a restaurant related information providing service, a financial related service, a music providing service, and a direction providing service. 7. The method of claim 1, further comprising: receiving, using the at least one processor, global positioning system (GPS) information or additional location information of the at least one mobile terminal from a satellite; and updating, using the at least one processor, the approximate location information of the at least one mobile terminal based on the GPS information or the additional location information of the at least one mobile terminal. 8. 
The method of claim 1, wherein the cell information includes at least one of identification information of a country in which the at least one mobile terminal is located, communication company identification information, location area code (LAC) information, identification information of a base station servicing the at least one mobile terminal, or identification information of a cell covered by the base station. 9. The method of claim 8, wherein the determining the approximate location information based on the cell information comprises displaying a region to which the at least one mobile terminal belongs such that the displayed region is refined as coverage of the region becomes narrower. 10. The method of claim 1, wherein the determining the approximate location information based on the cell information comprises determining location information that matches previous cell information as approximate location information of the at least one mobile terminal in response to absence of current cell information of the at least one mobile terminal. 11. A location information determining system comprising: a memory having computer readable instructions stored thereon; and at least one processor configured to execute the computer readable instructions to, receive cell information related to a base station servicing at least one mobile terminal, determine approximate location information based on the cell information using a location information database configured to store location information associated with a plurality of cell information, and transmit a location-based service to the at least one mobile terminal based on the determined approximate location information, wherein the determining the approximate location information based on the cell information includes, determining at least one neighboring base station adjacent to the base station servicing the at least one mobile terminal based on the cell information, estimating a location of the base station servicing the at least one mobile terminal based on cell identifier information of the at least one neighboring base station, calculating a centroid value of a cell covered by the base station servicing the at least one mobile terminal based on a cell shape corresponding to the estimated location of the base station, and determining location information corresponding to the centroid value of the cell as location information of the at least one mobile terminal. 12. The location information determining system of claim 11, wherein the approximate location information includes location coordinates corresponding to a centroid value of the base station servicing the at least one mobile terminal calculated based on a cell shape of the base station servicing the at least one mobile terminal. 13. The location information determining system of claim 11, wherein the at least one processor is further configured to: determine administrative district information of a region corresponding to the approximate location information. 14. The location information determining system of claim 11, wherein the at least one processor is further configured to: match location information corresponding to a centroid value of the cell and cell information of the base station servicing the at least one mobile terminal; and add the matching information to the location information database. 15. 
The location information determining system of claim 11, wherein the approximate location information includes current location coordinates or approximate location coordinates of the at least one mobile terminal in a cell covered by the base station servicing the at least one mobile terminal. 16. The location information determining system of claim 11, wherein the location-based service is at least one of: a weather information providing service, a discount coupon providing service, an information providing service, a restaurant related information providing service, a financial related service, a music providing service, and a direction providing service. 17. The location information determining system of claim 11, wherein the location information of the at least one mobile terminal is updated based on global positioning system (GPS) information or additional location information of the at least one mobile terminal. 18. A file distribution system for distributing an installation file for installing an application on a mobile terminal of a user, the file distribution system comprising: a memory having computer readable instructions stored thereon; and at least one processor configured to execute the computer readable instructions to, store and manage the installation file; and transmit the installation file to the mobile terminal in response to a request from the mobile terminal, wherein the application is configured to, control the mobile terminal to receive cell information related to a base station servicing the mobile terminal, control the mobile terminal to determine approximate location information based on the cell information using a location information database configured to store location information associated with a plurality of cell information, control the mobile terminal to receive a location-based service based on the determined approximate location information, and control the mobile terminal to display the determined approximate location information of the mobile terminal and the location-based service, wherein the controlling the mobile terminal to determine the approximate location information based on the cell information includes, controlling the mobile terminal to determine at least one neighboring base station adjacent to the base station servicing the mobile terminal based on the cell information, controlling the mobile terminal to estimate a location of the base station servicing the mobile terminal based on cell identifier information of the at least one neighboring base station, controlling the mobile terminal to calculate a centroid value of a cell covered by the base station servicing the mobile terminal based on a cell shape corresponding to the estimated location of the base station, and controlling the mobile terminal to determine location information corresponding to the centroid value of the cell as location information of the mobile terminal. 19. 
A mobile terminal, comprising: a memory having computer readable instructions stored thereon; and at least one processor configured to execute the computer readable instructions to, connect to a base station associated with a data network, receive cell information from the base station, the cell information including at least one of location area code (LAC) information, identification information of the base station, or coverage area information related to the area covered by the base station, transmit a request for a location-based service, the request including the received cell information and a mobile terminal identifier, wherein the request causes a server to, determine approximate location information based on the cell information using a location information database configured to store location information associated with a plurality of cell information; and the at least one processor is further configured to receive a location-based service in response to the request, wherein the determining the approximate location information based on the cell information includes, determining at least one neighboring base station adjacent to the base station servicing the mobile terminal based on the cell information, estimating a location of the base station servicing the mobile terminal based on cell identifier information of the at least one neighboring base station, calculating a centroid value of a cell covered by the base station servicing the mobile terminal based on a cell shape corresponding to the estimated location of the base station, and determining location information corresponding to the centroid value of the cell as location information of the mobile terminal. 20. The mobile terminal of claim 19, wherein the at least one processor is further configured to receive the location-based service by: determining administrative district information based on the received cell information and centroid information stored in the location information database associated with the base station; storing the administrative district information in the location information database in association with the mobile terminal identifier; extracting approximate location information related to the mobile terminal based on the determined gradation level and the stored administrative district information; and receiving the location-based service, the location-based service provided based on the extracted approximate location information. 21. The mobile terminal of claim 19, wherein the location-based service is at least one of a weather information providing service, a discount coupon providing service, an information providing service, a restaurant related information providing service, a financial related service, a music providing service, and a navigation service. 22. The mobile terminal of claim 19, wherein the at least one processor is further configured to: update the cell information upon receiving a user instruction to request the location-based service; and update the cell information when the mobile terminal connects to a second base station.
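Every independent claim in this record resolves the terminal's location to the centroid of the serving cell's shape. If the estimated cell shape is approximated as a simple polygon of boundary vertices, the centroid follows from the standard shoelace formula; the sketch below is illustrative, not the application's algorithm:

    def cell_centroid(vertices):
        """Centroid of a simple polygon (the estimated cell shape) given
        as [(x, y), ...] boundary vertices, via the shoelace formula;
        the result plays the role of the claims' 'centroid value of the
        cell'."""
        area2 = cx = cy = 0.0
        n = len(vertices)
        for i in range(n):
            x0, y0 = vertices[i]
            x1, y1 = vertices[(i + 1) % n]
            cross = x0 * y1 - x1 * y0   # signed twice-area contribution
            area2 += cross
            cx += (x0 + x1) * cross
            cy += (y0 + y1) * cross
        if area2 == 0.0:
            raise ValueError("degenerate cell shape")
        return cx / (3.0 * area2), cy / (3.0 * area2)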
2,600
10,814
10,814
15,888,643
2,683
A media rendering system includes a remote control device and an associated docking station. The remote control device interfaces with a remote server to stream media content for local and/or external playback. The remote control device may interface with a docking station to play back rendered media on one or more entertainment appliances. The portable device preferably has standard remote control capability in order to enable advanced features and functions for media playback.
1. (canceled) 2. A method for causing an appliance to be placed into an operating state appropriate for a rendering of a media stream by the appliance, comprising: receiving at a device a communication that functions to indicate to the device that the device is to be used for media rendering purposes with the appliance; and in response to the device receiving the communication that functions to indicate to the device that the device is to be used for media rendering purposes with the appliance, causing the device to automatically transmit at least one command to the appliance, wherein the at least one command will cause the appliance to be entered into the operating state appropriate for the rendering of the media stream by the appliance, and wherein the operating state appropriate for the rendering of the received media stream by the appliance comprises an input mode operating state of the appliance which allows the media stream, as received by the device, to be communicated to the appliance for rendering by the appliance. 3. The method as recited in claim 2, wherein the operating state appropriate for the rendering of the received media stream by the appliance further comprises a powered on operating state of the appliance. 4. The method as recited in claim 2, wherein the at least one command is communicated to the appliance via use of a wireless communications protocol. 5. The method as recited in claim 4, wherein the wireless communications protocol comprises a radio frequency protocol. 6. The method as recited in claim 2, wherein the communication that functions to indicate to the device that the device is to be used for media rendering purposes with the appliance comprises a communication received from a remote control device in communication with the device. 7. The method as recited in claim 2, wherein the appliance comprises a television. 8. The method as recited in claim 2, wherein the media stream is wirelessly received by the device from a remote server device. 9. A device adapted to cause an appliance to be placed into an operating state appropriate for a rendering of a media stream by the appliance, comprising: an output port for communicating a received media stream to an input port of the appliance; a processor; and a memory having stored thereon instructions which, when executed by the processor, cause the device to respond to a communication that functions to indicate to the device that the device is to be used for media rendering purposes with the appliance by automatically transmitting at least one command to the appliance, wherein the at least one command will cause the appliance to be entered into the operating state appropriate for the rendering of the media stream by the appliance, and wherein the operating state appropriate for the rendering of the received media stream by the appliance comprises an input mode operating state of the appliance which allows the media stream, as received by the device, to be communicated to the appliance for rendering by the appliance. 10. The device as recited in claim 9, wherein the operating state appropriate for the rendering of the received media stream by the appliance further comprises a powered on operating state of the appliance. 11. The device as recited in claim 9, wherein the at least one command is communicated to the appliance via use of a wireless communications protocol. 12. The device as recited in claim 11, wherein the wireless communications protocol comprises a radio frequency protocol. 13. 
The device as recited in claim 9, wherein the communication that functions to indicate to the device that the device is to be used for media rendering purposes with the appliance comprises a communication received from a remote control device in communication with the device. 14. The device as recited in claim 9, wherein the appliance comprises a television. 15. The device as recited in claim 9, wherein the media stream is wirelessly received by the device from a remote server device.
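The independent claims describe, in effect, a dock-style handshake: a render-intent communication arrives at the device, and the device automatically drives the appliance into a powered-on, correct-input state before passing the stream through. A minimal Python sketch of that control flow, assuming hypothetical command names (POWER_ON, SELECT_INPUT) and a generic wireless transport, since the application does not specify a code-level protocol:

    # Hypothetical sketch only; command names and the transport API are
    # assumptions, not taken from the application.
    class DockDevice:
        def __init__(self, transport, appliance_input="HDMI1"):
            self.transport = transport              # e.g., an RF or IR sender
            self.appliance_input = appliance_input  # input the dock feeds

        def on_render_intent(self, media_stream):
            # The "communication that functions to indicate" rendering is
            # wanted triggers automatic appliance setup (claims 2-3, 9-10).
            self.transport.send_command("POWER_ON")
            self.transport.send_command("SELECT_INPUT", self.appliance_input)
            for chunk in media_stream:
                self.transport.send_media(chunk)    # output port -> appliance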
2,600
10,815
10,815
15,891,004
2,664
A method is disclosed for switching connections of Bluetooth headsets. The method includes indicating, on a user interface of a first Bluetooth device, (i) the first Bluetooth device has a first Bluetooth connection with a Bluetooth headset, and (ii) a second Bluetooth device has no Bluetooth connection with the Bluetooth headset. Also, the method includes receiving user input to establish a second Bluetooth connection between the second Bluetooth device and the Bluetooth headset. Further, the method includes sending a message to the Bluetooth headset. The message includes a unique device identifier of the second Bluetooth device. The message represents a command for the Bluetooth headset to establish the second Bluetooth connection with the second Bluetooth device. Responsive to the message, the Bluetooth headset (i) releases the first Bluetooth connection with the first Bluetooth device and (ii) establishes the second Bluetooth connection with the second Bluetooth device.
1. Computer-readable media embodying instructions executable by a processor in a first Bluetooth device to perform functions comprising: indicating, upon a user interface of the first Bluetooth device, a connection status of the first Bluetooth device, a second Bluetooth device, and a Bluetooth headset, wherein the connection status indicates that (i) the first Bluetooth device has a first Bluetooth connection with the Bluetooth headset and (ii) the second Bluetooth device has no Bluetooth connection with the Bluetooth headset; receiving, from the user interface of the first Bluetooth device, a user input to establish a second Bluetooth connection between the second Bluetooth device and the Bluetooth headset; and sending a message from the first Bluetooth device to the Bluetooth headset responsive to receiving the user input, the message including a unique device identifier of the second Bluetooth device, wherein the message represents a command for the Bluetooth headset to establish the second Bluetooth connection with the second Bluetooth device; wherein the Bluetooth headset, responsive to receiving the message, (i) releases the first Bluetooth connection with the first Bluetooth device and then (ii) establishes the second Bluetooth connection with the second Bluetooth device. 2. The computer-readable media of claim 1, wherein the functions further comprise: receiving a status message, wherein the status message indicates that the second Bluetooth device has the second Bluetooth connection with the Bluetooth headset. 3. The computer-readable media of claim 2, wherein the functions further comprise: updating the user interface of the first Bluetooth device to indicate that (i) the first Bluetooth device has no Bluetooth connection with the Bluetooth headset and (ii) the second Bluetooth device has the second Bluetooth connection with the Bluetooth headset responsive to receiving the status message. 4. The computer-readable media of claim 2, wherein the status message is received from a server. 5. The computer-readable media of claim 1, wherein the unique device identifier of the second Bluetooth device includes a Bluetooth Device Address of the second Bluetooth device. 6. The computer-readable media of claim 5, wherein the functions further comprise: prior to sending the message, receiving the Bluetooth Device Address of the second Bluetooth device from a server. 7. The computer-readable media of claim 1, wherein the message from the first Bluetooth device to the Bluetooth headset is sent over the first Bluetooth connection. 8. 
A method for connection switching for Bluetooth headsets, comprising: indicating, upon a user interface of a first Bluetooth device, a connection status of the first Bluetooth device, a second Bluetooth device, and a Bluetooth headset, wherein the connection status indicates that (i) the first Bluetooth device has a first Bluetooth connection with the Bluetooth headset and (ii) the second Bluetooth device has no Bluetooth connection with the Bluetooth headset; receiving, from the user interface of the first Bluetooth device, a user input to establish a second Bluetooth connection between the second Bluetooth device and the Bluetooth headset; and sending a message from the first Bluetooth device to the Bluetooth headset responsive to receiving the user input, the message including a unique device identifier of the second Bluetooth device, wherein the message represents a command for the Bluetooth headset to establish the second Bluetooth connection with the second Bluetooth device; wherein the Bluetooth headset, responsive to receiving the message, (i) releases the first Bluetooth connection with the first Bluetooth device and then (ii) establishes the second Bluetooth connection with the second Bluetooth device. 9. The method of claim 8, comprising: receiving a status message, wherein the status message indicates that the second Bluetooth device has the second Bluetooth connection with the Bluetooth headset. 10. The method of claim 9, comprising: updating the user interface of the first Bluetooth device to indicate that (i) the first Bluetooth device has no Bluetooth connection with the Bluetooth headset and (ii) the second Bluetooth device has the second Bluetooth connection with the Bluetooth headset responsive to receiving the status message. 11. The method of claim 9, wherein the status message is received from a server. 12. The method of claim 8, wherein the unique device identifier of the second Bluetooth device includes a Bluetooth Device Address of the second Bluetooth device. 13. The method of claim 12, comprising: prior to sending the message, receiving the Bluetooth Device Address of the second Bluetooth device from a server. 14. The method of claim 8, wherein the message from the first Bluetooth device to the Bluetooth headset is sent over the first Bluetooth connection.
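Claims 1 and 8 amount to a three-party handoff: the first device tells the headset the Bluetooth Device Address of the second device, and the headset releases its current link before connecting to that address. A short sketch of the headset side, with assumed message and radio abstractions; a real implementation would sit on an actual Bluetooth stack, which is not shown here:

    # Illustrative only; SwitchMessage and the radio API are assumptions.
    from dataclasses import dataclass

    @dataclass
    class SwitchMessage:
        target_bd_addr: str   # unique device identifier of the second device

    class Headset:
        def __init__(self, radio):
            self.radio = radio
            self.connected_to = None   # BD_ADDR of the current peer, if any

        def on_message(self, msg: SwitchMessage):
            # (i) release the first connection, then (ii) establish the
            # second, matching the ordering recited in claims 1 and 8.
            if self.connected_to is not None:
                self.radio.disconnect(self.connected_to)
            self.radio.connect(msg.target_bd_addr)
            self.connected_to = msg.target_bd_addr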
2,600
10,816
10,816
15,839,775
2,648
The present application provides an antenna system for use in an electronic device. The antenna system includes a conductive substrate having a width, which corresponds to the distance between two opposite side edges of the conductive substrate proximate one end of the device. The antenna system further includes a pair of conductive arms, where each conductive arm in the pair of conductive arms has a connected end, which couples to the conductive substrate at alternative ones of the opposite side edges of the conductive substrate proximate the one end of the device. Each conductive arm further has an open end which extends away from the respective coupled side edge toward the other one of the opposite side edges in a direction of extension. The open ends of the conductive arms in the pair extend toward one another, stopping short of touching or overlapping the other conductive arm in the pair in the direction of extension away from the respective coupled side edge. Correspondingly, a gap is present between the respective open ends of the pair of conductive arms. A signal source is coupled to each of the conductive arms proximate the respective open ends of the pair of conductive arms for supplying a signal. The signal source is coupled to at least one of the conductive arms via a respective feed line conductor, where the feed line conductor that is coupled to the open end of the at least one of the pair of conductive arms extends in the direction of extension which traverses at least a portion of the gap between the open ends of the conductive arms.
1. An antenna system for use in an electronic device, the antenna system comprising: a conductive substrate having a width, which corresponds to the distance between two opposite side edges of the conductive substrate proximate one end of the device; a pair of conductive arms, where each conductive arm in the pair of conductive arms has a connected end, which couples to the conductive substrate at alternative ones of the opposite side edges of the conductive substrate proximate the one end of the device, and an open end which extends away from the respective coupled side edge toward the other one of the opposite side edges in a direction of extension, where the open ends of the conductive arms in the pair extend toward one another, stopping short of touching or overlapping the other conductive arm in the pair in the direction of extension away from the respective coupled side edge, thereby forming a gap between the respective open ends of the pair of conductive arms; a signal source coupled to each of the conductive arms proximate the respective open ends of the pair of conductive arms for supplying a signal, wherein the signal source is coupled to at least one of the conductive arms via a respective feed line conductor, where the feed line conductor coupled to the open end of the at least one of the pair of conductive arms extends in the direction of extension which traverses at least a portion of the gap between the open ends of the conductive arms. 2. An antenna system in accordance with claim 1, wherein the respective feed line conductor traverses the gap between the open ends of the conductive arms. 3. An antenna system in accordance with claim 2, wherein in addition to traversing the gap between the open ends of the conductive arms, the respective feed line conductor overlaps at least a portion of the conductive arm not coupled to the respective feed line conductor along the direction of extension that the conductive arms extend towards one another away from the respective coupled side edge. 4. An antenna system in accordance with claim 1, wherein the signal source is coupled to each of the conductive arms in the pair of conductive arms via a pair of respective feed line conductors, where each of the respective feed line conductors coupled to the open end of the respective one of the pair of conductive arms extends in the direction of extension which traverses at least a portion of the gap between the open ends of the conductive arms. 5. An antenna system in accordance with claim 4, wherein each of the respective feed line conductors traverses the gap between the open ends of the conductive arms. 6. An antenna system in accordance with claim 5, wherein in addition to traversing the gap between the open ends of the conductive arms, each of the respective feed line conductors overlaps at least a portion of the conductive arm not coupled to the respective feed line conductor along the direction of extension that the conductive arms extend towards one another away from the respective coupled side edge. 7. An antenna system in accordance with claim 1, wherein the two side edges of the conductive substrate are each respectively associated with a corresponding edge of the electronic device. 8. An antenna system in accordance with claim 1, wherein the conductive substrate includes at least part of a ground plane of a circuit substrate. 9. An antenna system in accordance with claim 1, wherein the pair of conductive arms are located at or near a top of a housing for the electronic device. 10. 
An antenna system in accordance with claim 9, wherein the housing of the electronic device is conductive and the pair of conductive arms are formed as part of the conductive housing. 11. An antenna system in accordance with claim 9, further comprising a second pair of conductive arms located at or near a bottom of the housing for the electronic device. 12. An antenna system in accordance with claim 1, wherein at least part of one or more of the feed line conductors are conductive traces formed as part of a circuit substrate. 13. An antenna system in accordance with claim 1, wherein the signal being supplied to the pair of conductive arms by the signal source includes a pair of respective signals that are substantially 180 degrees out of phase. 14. An antenna system in accordance with claim 1, wherein the signal being supplied to the pair of conductive arms by the signal source includes a pair of respective signals that have an opposite polarity. 15. An antenna system in accordance with claim 1, wherein each of the respective conductive arms and any associated feed line conductor, in addition to the conductive substrate, form a loop. 16. An antenna system in accordance with claim 15, wherein each of the loops occupies a portion of a window located between a top edge of the conductive substrate and the pair of conductive arms. 17. An antenna system in accordance with claim 15, wherein when the signal being supplied to each of the pair of conductive arms is supplied via a respective feed line conductor, and the respective feed line conductors overlap, the respective conductive arms, the associated feed lines, and the conductive substrate each form a respective loop, which at least partially overlaps with the other one of the respective loops. 18. An antenna system in accordance with claim 17, wherein the signal being supplied to the pair of conductive arms by the signal source includes a pair of respective signals that are of an opposite polarity, so as to produce currents in the overlapping portions of the respective feed line conductors that flow in the same direction. 19. An antenna system in accordance with claim 1, wherein the antenna system is adapted for producing a wireless near field communication signal. 20. An antenna system in accordance with claim 1, wherein the electronic device is a hand held cellular radiotelephone.
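Claims 13 and 14 treat "substantially 180 degrees out of phase" and "opposite polarity" as the same drive condition. A quick numeric check of that equivalence (my own sketch, not from the application; the 13.56 MHz carrier is an assumption suggested by the near field communication context of claim 19):

    # Sanity check that an opposite-polarity signal equals a 180-degree
    # phase-shifted copy; the carrier frequency is an assumption.
    import numpy as np

    t = np.linspace(0.0, 1e-6, 1000)              # one microsecond of samples
    f = 13.56e6                                   # assumed NFC carrier
    s1 = np.sin(2 * np.pi * f * t)                # drive to the first arm
    s2_polarity = -s1                             # opposite polarity (claim 14)
    s2_phase = np.sin(2 * np.pi * f * t + np.pi)  # 180-degree shift (claim 13)
    assert np.allclose(s2_polarity, s2_phase)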
2,600
10,817
10,817
16,276,820
2,684
A method for interleaving time slots in a multi-antenna system for communication with RFID tags is described. An exemplary system has a first RFID interrogator and first and second antennas. The first and second antennas direct signals to and receive signals from respective first and second interrogation zones. A first interrogation signal is transmitted to the first antenna. A first acquire window for receiving a signal from a first RFID transponder is opened after the first interrogation signal. A second interrogation signal is transmitted to the second antenna after the first interrogation signal, and a second acquire window for receiving a signal from a second RFID transponder is opened after the second interrogation signal.
1. An RFID interrogation system comprising: a first RFID interrogator; a first antenna; and a second antenna, wherein the first and second antennas are located to direct signals to and to receive signals from respective first and second interrogation zones, wherein a first interrogation signal from the first RFID interrogator is transmitted to the first antenna, wherein a first acquire window for receiving a signal from a first RFID transponder is opened after said first interrogation signal; wherein a second interrogation signal from the first RFID interrogator or from a second RFID interrogator is transmitted to the second antenna, wherein the second interrogation signal is transmitted after the first interrogation signal, wherein a second acquire window for receiving a signal from a second RFID transponder is opened after the second interrogation signal. 2. The system of claim 1, wherein said system is configured to assign a signal reception occurring during said first acquire window as a response to said first interrogation signal. 3. The system of claim 1, wherein said system is configured to assign a signal reception occurring during said second acquire window as a response to said second interrogation signal. 4. The system of claim 1 wherein a third antenna is located between said first and second antennas and is located to direct third interrogation signals to and to receive third signals from a third interrogation zone, wherein said third interrogation signals do not overlap in time with said first interrogation signal or said second interrogation signal. 5. The system of claim 4 wherein a fourth antenna is located adjacent to said second antenna, but not adjacent to said third antenna and is located to direct fourth interrogation signals to and to receive fourth signals from a fourth interrogation zone, wherein said fourth interrogation signals do not overlap in time with said first interrogation signal or said second interrogation signal. 6. The system of claim 5 wherein a fifth antenna is located between said second and third antennas and is located to direct fifth interrogation signals to and to receive fifth signals from a fifth interrogation zone, wherein said fifth interrogation signals do not overlap in time with said first interrogation signal or second interrogation signal. 7. The system of claim 6 wherein a sixth antenna is located adjacent to said fourth antenna, but not adjacent to said second antenna and is located to direct sixth interrogation signals to and to receive sixth signals from a sixth interrogation zone, wherein said sixth interrogation signals do not overlap in time with said first interrogation signal or said second interrogation signal. 8. The system of claim 1, wherein said first and second interrogation zones are close enough to each other that an RFID transponder in either of said first or second interrogation zones can receive both of said first and second interrogation signals. 9. The system of claim 1, wherein said first and second antennas are separated by additional antennas and wherein said additional antennas transmit interrogation signals at different times than said first and second interrogation signals.
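The claims reduce to a scheduling rule: the listed antennas never transmit at the same time, and each interrogation burst is followed by an acquire window whose receptions are attributed to that burst (claims 2 and 3). A minimal round-robin sketch of how an interrogator might interleave the slots; the slot durations are made-up values, not from the application:

    # Slot lengths are illustrative assumptions.
    INTERROGATE_MS = 5    # interrogation burst per antenna
    ACQUIRE_MS = 20       # acquire window opened after each burst

    def schedule(antennas, rounds=1):
        """Yield non-overlapping (start_ms, end_ms, antenna, phase) slots."""
        clock = 0
        for _ in range(rounds):
            for ant in antennas:
                yield (clock, clock + INTERROGATE_MS, ant, "interrogate")
                clock += INTERROGATE_MS
                # Any response received here is assigned to this antenna's
                # interrogation signal (claims 2-3).
                yield (clock, clock + ACQUIRE_MS, ant, "acquire")
                clock += ACQUIRE_MS

    for slot in schedule(["A1", "A3", "A2"]):   # spatially interleaved order
        print(slot)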
2,600
10,818
10,818
15,285,857
2,631
Embodiments include systems, methods, and computer program products for tracking and sensing medical assets. Systems include a plurality of long range transmitters. Systems also include a medical asset box including a medical asset, a radio frequency ID microchip in proximity to the medical asset, and an extended antenna that is capable of receiving a signal from the radio frequency ID microchip and transmitting the signal to an external device.
1. A system for tracking medical assets, the system comprising: a plurality of long range transmitters in communication with a wireless sensor network and a medical asset box, the medical asset box comprising: a medical asset comprising a medical device enclosed within the medical asset box; a medical asset tag comprising a radio frequency ID microchip in proximity to the medical asset; and an extended antenna positioned on an exterior of the medical asset box, wherein the extended antenna is capable of receiving a signal from the radio frequency ID microchip and transmitting the signal to an external device via the plurality of long range transmitters. 2. The system of claim 1, wherein the wireless sensor network comprises a cellular, Wi-Fi, or Ethernet connection in communication with an external device. 3. The system of claim 1, wherein the wireless sensor network is a closed network independent from a facility network. 4. The system of claim 1, wherein the radio frequency ID microchip is positioned within an interior of the medical asset box. 5. The system of claim 1, wherein the radio frequency ID microchip is positioned on an exterior of the medical asset box. 6. The system of claim 1, wherein the extended antenna comprises an omni-directional antenna. 7. The system of claim 1, wherein the radio frequency ID microchip is attached to the medical asset. 8. The system of claim 1, wherein the radio frequency ID microchip is water resistant. 9. The system of claim 1 further comprising a controller box. 10. The system of claim 9, wherein the controller box comprises: a tag manager in communication with the medical asset box capable of being programmed to receive data continuously or periodically; a battery providing power to the tag manager; and a wireless gateway in communication with the tag manager and an external device. 11. (canceled) 12. The system of claim 10, wherein the wireless gateway comprises a cellular, Wi-Fi, or Ethernet connection. 13. The system of claim 10, wherein the battery is hermetically sealed. 14. The system of claim 10, wherein the battery is rechargeable. 15. A medical asset tag for tracking medical assets, the medical asset tag comprising: a radio frequency signal generated by an RF ID attached to a medical asset, wherein the radio frequency signal is in proximity to and in communication with an extended antenna on a medical asset box in communication with a wireless sensor network, and wherein the radio frequency signal transmits a medical asset data packet comprising a unique identifier for the medical asset to the antenna; a high temperature battery positioned within the RF ID for supplying power to the medical asset tag; a motion sensor positioned within the RF ID for generating an RF ID location; a control circuit in communication with the RF ID and the high temperature battery, wherein the control circuit is capable of associating the unique identifier with the RF ID location to generate the medical asset data packet; and a memory in communication with the control circuit. 16. The medical asset tag of claim 15, wherein the extended antenna communicates with an external device through the wireless sensor network. 17. 
A computer program product for tracking medical assets, comprising a computer readable storage medium having program instructions embodied therewith, wherein the instructions are executable by a processor to cause the processor to perform a method comprising: receiving a unique identifier from a radio frequency signal of a medical asset tag affixed to a medical asset comprising a medical device enclosed within a medical asset box, wherein the unique identifier uniquely identifies the medical asset; associating the unique identifier with a medical asset location to generate a medical asset data packet, wherein the medical asset location is determined based upon the presence of a signal or a strength of the signal received at a long range transmitter from the medical asset box; and outputting the medical asset data packet to a gateway. 18. The computer program product of claim 17, wherein the method comprises encrypting the medical asset data packet. 19. The computer program product of claim 17, wherein the medical asset data packet comprises motion sensor data. 20. The computer program product of claim 17, wherein software is provided as a service in a cloud environment. 21. The system of claim 1, wherein the medical asset comprises a surgical instrument or implant component.
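Claim 17's method is essentially: read a unique identifier, infer a location from which long range transmitter hears the tag (presence or signal strength), bundle the two into a data packet, and forward it to a gateway. A hedged sketch of that association step, where the strongest-RSSI rule and all field names are my assumptions rather than anything specified by the application:

    # Field names and the strongest-signal location rule are assumptions.
    def locate(rssi_by_transmitter):
        """Pick the long range transmitter receiving the tag most strongly."""
        return max(rssi_by_transmitter, key=rssi_by_transmitter.get)

    def build_packet(unique_id, rssi_by_transmitter):
        return {
            "asset_id": unique_id,                    # uniquely identifies the asset
            "location": locate(rssi_by_transmitter),  # presence/strength based
        }

    packet = build_packet("TAG-0001", {"OR-3": -52, "hallway": -71})
    print(packet)   # {'asset_id': 'TAG-0001', 'location': 'OR-3'}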
2,600
10,819
10,819
13,530,659
2,625
A computer input device, such as a mouse, has a surface movement sensor in communication with a processing circuit for providing to the processing circuit first signals indicative of sensed movement of the computer input device upon a surface, and one or more touchless sensor subsystems in communication with the processing circuit for providing to the processing circuit second signals indicative of sensed surface movements relative to the computer input device. A transmission circuit under control of the processing circuit issues transmissions to a computer representative of the first and second signals.
1. A computer input device, comprising: a housing in which is carried a processing circuit; a memory having instructions for controlling operations of the processing circuit; a surface movement sensor in communication with the processing circuit providing to the processing circuit first signals indicative of sensed movement of the computer input device upon a surface; a first touchless sensor subsystem in communication with the processing circuit providing to the processing circuit second signals indicative of sensed surface movements relative to the computer input device occurring in spaced proximity to the computer input device; and a transmission circuit under control of the processing circuit for issuing transmissions to a computer representative of the first and second signals. 2. The computer input device as recited in claim 1, comprising a second touchless sensor subsystem in communication with the processing circuit providing to the processing circuit third signals indicative of sensed surface movements relative to the computer input device occurring in spaced proximity to the computer input device and the transmission circuit under control of the processing circuit further issues transmissions to a computer representative of the third signals. 3. The computer input device as recited in claim 2, wherein the first and second touchless sensor subsystems are disposed on opposite sides of the housing of the computer input device. 4. The computer input device as recited in claim 3, wherein the first and second touchless sensor subsystems are optical sensing subsystems. 5. The computer input device as recited in claim 4, wherein light is generated for use by the first and second touchless sensor subsystems from a source of light energy external to the first and second touchless sensor subsystems. 6. The computer input device as recited in claim 3, wherein the first and second touchless sensor subsystems are thermal sensing subsystems. 7. The computer input device as recited in claim 3, wherein the first and second touchless sensor subsystems are sound sensing subsystems. 8. The computer input device as recited in claim 4, comprising one or more buttons carried on the housing and providing to the processing circuit fourth signals indicative of a sensed interaction with the one or more buttons and the transmission circuit under control of the processing circuit further issues transmissions to a computer representative of the fourth signals. 9. The computer input device as recited in claim 8, comprising a scroll wheel carried on the housing and providing to the processing circuit fifth signals indicative of a sensed interaction with the scroll wheel and the transmission circuit under control of the processing circuit further issues transmissions to a computer representative of the fifth signals. 10. The computer input device as recited in claim 1, wherein the transmission circuit transmits signals to a computer using an RF protocol. 11. The computer input device as recited in claim 1, wherein the transmission circuit transmits signals to a computer using an IR protocol.
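The device fuses two signal families: surface-movement deltas from the sensor underneath and touchless readings of hand motion beside the housing. One plausible way to merge them into a single report for transmission, sketched with an assumed report layout since the application does not define one:

    # The Report layout is an assumption for illustration.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Report:
        dx: int                  # first signals: movement upon a surface
        dy: int
        gesture: Optional[str]   # second/third signals: touchless movement

    def build_report(surface_delta, touchless_event=None):
        dx, dy = surface_delta
        return Report(dx=dx, dy=dy, gesture=touchless_event)

    # The transmission circuit would then send this over RF or IR
    # (claims 10-11), e.g. transport.send(build_report((3, -1), "swipe_left")).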
2,600
10,820
10,820
12,361,072
2,625
A computer input device, such as a mouse, has a processing circuit, a memory having instructions for controlling operations of the processing circuit, a surface movement sensor in communication with the processing circuit providing to the processing circuit first signals indicative of sensed movement of the computer input device upon a surface, and one or more touchless sensor subsystems in communication with the processing circuit providing to the processing circuit second signals indicative of sensed surface movements relative to the computer input device occurring in spaced proximity to the computer input device. A transmission circuit under control of the processing circuit issues transmissions to a computer representative of the first and second signals to cause regions or locations on a computer display screen to be pointed to, to cause information which is represented on the computer display screen to be moved and/or selected, to cause locations on the computer display screen to be designated, etc.
1. A computer input device, comprising: a housing in which is carried a processing circuit; a memory having instructions for controlling operations of the processing circuit; a surface movement sensor in communication with the processing circuit providing to the processing circuit first signals indicative of sensed movement of the computer input device upon a surface; a first touchless sensor subsystem in communication with the processing circuit providing to the processing circuit second signals indicative of sensed surface movements relative to the computer input device occurring in spaced proximity to the computer input device; and a transmission circuit under control of the processing circuit for issuing transmissions to a computer representative of the first and second signals. 2. The computer input device as recited in claim 1, comprising a second touchless sensor subsystem in communication with the processing circuit providing to the processing circuit third signals indicative of sensed surface movements relative to the computer input device occurring in spaced proximity to the computer input device and the transmission circuit under control of the processing circuit further issues transmissions to a computer representative of the third signals. 3. The computer input device as recited in claim 2, wherein the first and second touchless sensor subsystems are disposed on opposite sides of the housing of the computer input device. 4. The computer input device as recited in claim 3, wherein the first and second touchless sensor subsystems are optical sensing subsystems. 5. The computer input device as recited in claim 4, wherein light is generated for use by the first and second touchless sensor subsystems from a source of light energy external to the first and second touchless sensor subsystems. 6. The computer input device as recited in claim 3, wherein the first and second touchless sensor subsystems are thermal sensing subsystems. 7. The computer input device as recited in claim 3, wherein the first and second touchless sensor subsystems are sound sensing subsystems. 8. The computer input device as recited in claim 4, comprising one or more buttons carried on the housing and providing to the processing circuit fourth signals indicative of a sensed interaction with the one or more buttons and the transmission circuit under control of the processing circuit further issues transmissions to a computer representative of the fourth signals. 9. The computer input device as recited in claim 8, comprising a scroll wheel carried on the housing and providing to the processing circuit fifth signals indicative of a sensed interaction with the scroll wheel and the transmission circuit under control of the processing circuit further issues transmissions to a computer representative of the fifth signals. 10. The computer input device as recited in claim 1, wherein the transmission circuit transmits signals to a computer using an RF protocol. 11. The computer input device as recited in claim 1, wherein the transmission circuit transmits signals to a computer using an IR protocol. 12. 
A computer input device, comprising: a housing in which is carried a processing circuit; a memory having instructions for controlling operations of the processing circuit; first and second touchless sensor subsystems in communication with the processing circuit providing to the processing circuit signals indicative of sensed surface movements relative to the computer input device occurring in spaced proximity to the computer input device; and a transmission circuit under control of the processing circuit for issuing transmissions to a computer representative of the signals. 13. The computer input device as recited in claim 12, wherein the first and second touchless sensor subsystems are disposed on opposite sides of the housing of the computer input device. 14. The computer input device as recited in claim 13, wherein the first and second touchless sensor subsystems are optical sensing subsystems. 15. The computer input device as recited in claim 14, wherein light is generated for use by the first and second touchless sensor subsystems from a source of light energy external to the first and second touchless sensor subsystems. 16. The computer input device as recited in claim 13, wherein the first and second touchless sensor subsystems are thermal sensing subsystems. 17. The computer input device as recited in claim 13, wherein the first and second touchless sensor subsystems are sound sensing subsystems.
2,600
10,821
10,821
16,223,143
2,627
According to an example aspect of the present invention, there is provided an apparatus comprising at least one processing core and at least one memory including computer program code, the at least one memory and the computer program code being configured to, with the at least one processing core, cause the apparatus at least to predict, based at least in part on a calendar application, a need for a rich media interface and to trigger startup of a higher capability processing device from among a low capability processing device and the higher capability processing device in the apparatus at a time that is selected based on the prediction.
1. An apparatus comprising at least one processing core, at least one memory including computer program code, the at least one memory and the computer program code being configured to, with the at least one processing core, cause the apparatus at least to predict, based at least in part on a calendar application, a need for a rich media interface and to trigger startup of a higher capability processing device from among a low capability processing device and the higher capability processing device in the apparatus at a time that is selected based on the prediction. 2. The apparatus according to claim 1, wherein the apparatus is further configured to cause the higher capability processing device to enter and leave a hibernation state based at least partly on a determination, by the low capability processing device, concerning an instruction from outside the apparatus. 3. The apparatus according to claim 1, wherein the higher capability processing device and the low capability processing device each comprise processing cores. 4. The apparatus according to claim 1, wherein the higher capability processing device and the low capability processing device are each electrically interfaced with a shared random access memory. 5. The apparatus according to claim 1, wherein the apparatus is configured to obtain a plurality of calendar events occurring during a same day from the calendar application, to display a time axis on a screen, and to display, relative to the time axis at parts of the time axis that are selected based on scheduled times of day of the calendar events, a plurality of symbols, the symbols corresponding to at least two of the plurality of calendar events. 6. The apparatus according to claim 1, wherein the low capability processing device is unable to render the rich media interface. 7. The apparatus according to claim 1, wherein the low capability processing device is configured to cause the higher capability processing device to hibernate responsive to a determination that a user interface type not supported by the low capability processing device is no longer requested. 8. The apparatus according to claim 1, wherein the apparatus comprises a smart watch. 9. The apparatus according to claim 1, wherein the apparatus comprises a handheld communications device. 10. The apparatus according to claim 1, wherein the apparatus comprises a personal fitness tracker. 11. The apparatus as claimed in claim 1, wherein the apparatus comprises an at least partially retractable, rotatable hardware element, and the apparatus is configured to be operable by a user by interacting with the rotatable hardware element. 12. A method comprising: causing an apparatus to predict, based at least in part on a calendar application, a need for a rich media interface, and triggering startup of a higher capability processing device from among a low capability processing device and the higher capability processing device in the apparatus at a time that is selected based on the prediction. 13. The method according to claim 12, further comprising causing the higher capability processing device to enter and leave a hibernation state based at least partly on a determination, by the low capability processing device, concerning an instruction from outside the apparatus. 14. The method according to claim 12, wherein the higher capability processing device and the low capability processing device each comprise processing cores. 15. 
The method according to claim 12, wherein the higher capability processing device and the low capability processing device are each electrically interfaced with a shared random access memory. 16. The method according to claim 12, further comprising obtaining a plurality of calendar events occurring during a same day from the calendar application, displaying a time axis on a screen, and displaying, relative to the time axis at parts of the time axis that are selected based on scheduled times of day of the calendar events, a plurality of symbols, the symbols corresponding to at least two of the plurality of calendar events. 17. The method according to claim 12, wherein the low capability processing device is unable to render the rich media interface. 18. The method according to claim 12, further comprising causing, by the low capability processing device, the higher capability processing device to hibernate responsive to a determination that a user interface type not supported by the low capability processing device is no longer requested. 19. A non-transitory computer readable medium having stored thereon a set of computer readable instructions that, when executed by at least one processor, cause an apparatus to at least: cause the apparatus to predict, based at least in part on a calendar application, a need for a rich media interface, and trigger startup of a higher capability processing device from among a low capability processing device and the higher capability processing device in the apparatus at a time that is selected based on the prediction.
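The independent claims amount to a simple scheduling rule: infer from calendar entries when a rich media interface will be needed and start the higher-capability processor just ahead of that time. A minimal sketch under assumed names; WAKE_LEAD and wake_high_capability_core are hypothetical placeholders for platform-specific power-management hooks, not anything from the application:

```python
# Minimal sketch: scan calendar entries, predict the need for a rich
# media interface, and schedule startup of the higher-capability
# processing device a lead time before each qualifying event.
from datetime import datetime, timedelta

WAKE_LEAD = timedelta(minutes=2)   # assumed startup latency of the big core

def wake_high_capability_core(at: datetime) -> None:
    # Stand-in for the real power-management call.
    print(f"scheduling high-capability core startup at {at:%H:%M}")

def schedule_startups(events: list[tuple[datetime, bool]],
                      now: datetime) -> None:
    """events: (start_time, needs_rich_media) pairs from the calendar."""
    for start, needs_rich_media in events:
        if needs_rich_media and start > now:
            wake_high_capability_core(start - WAKE_LEAD)

now = datetime(2024, 1, 1, 9, 0)
schedule_startups([(datetime(2024, 1, 1, 9, 30), True),
                   (datetime(2024, 1, 1, 10, 0), False)], now)
```

In the meantime the low-capability device keeps handling the interface, and (per claims 7 and 18) it can send the big core back to hibernation once the rich media interface is no longer requested.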
2,600
10,822
10,822
14,282,785
2,684
A system and method for displaying advertising content. A first app installed on a first device functions to retrieve the advertising content and to then provide the advertising content to a second app installed on a second device. The second app installed on the second device causes the advertising content to be displayed as an overlay in a display associated with the second device.
1. A method for displaying advertising content, comprising: using a first app installed on a first device to retrieve advertising content; and causing the advertising content to be provided from the first app installed on the first device to a second app installed on a second device whereupon the second app installed on the second device will function to display the advertising content as an overlay in a display associated with the second device. 2. The method as recited in claim 1, comprising causing the first app installed on the first device to retrieve advertising content from a network server. 3. The method as recited in claim 2, comprising causing the first app installed on the first device to provide to the network server information for use by the network server in retrieving the advertising content. 4. The method as recited in claim 3, wherein the information provided to the network server by the first app installed on the first device is first provided to the first app installed on the first device by the second app installed on the second device. 5. The method as recited in claim 4, wherein the information provided to the network server comprises information that functions to identify the second device. 6. The method as recited in claim 4, wherein the information provided to the network server comprises information that functions to identify media content that the second device is causing to be displayed in the display. 7. The method as recited in claim 3, wherein the information provided to the network server comprises information that functions to identify media content that the first device is providing to the second device for display in the display. 8. The method as recited in claim 2, wherein the second app instructs the first app to initiate a retrieval of the advertising content from the network server. 9. The method as recited in claim 8, wherein the information provided to the network server comprises information that functions to identify media content that the second device is causing to be displayed in the display. 10. The method as recited in claim 2, wherein the first app retrieves advertising content from the network server at predetermined times. 11. The method as recited in claim 10, wherein the first device stores the advertising content in memory for later provision by the first app to the second app. 12. The method as recited in claim 1, wherein the first device comprises a media streaming device and the second device comprises a television. 13. The method as recited in claim 1, wherein the first device comprises a smart phone and the second device comprises a television. 14. The method as recited in claim 1, comprising data-synchronizing the first app installed on the first device and the second app installed on the second device. 15. The method as recited in claim 1, comprising using a wireless network to provide the advertising content from the first app to the second app. 16. The method as recited in claim 15, wherein the wireless network comprises an RF network.
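Claim 1 is essentially a relay pattern: one app fetches advertising content keyed by what the other device is showing, and a second app overlays it. The sketch below is illustrative only; the in-memory "server" dictionary, the identifiers, and the function names are hypothetical stand-ins, not the patented implementation:

```python
# Illustrative relay pattern: the first app retrieves ad content (here
# from a stand-in lookup table instead of a network server) and the
# second app renders it as an overlay on its display.
AD_SERVER = {("tv-123", "movie-42"): "Try SodaCo!"}

def first_app_retrieve(device_id: str, content_id: str) -> str:
    """First app: send identifying info to the 'server', get ad content."""
    return AD_SERVER.get((device_id, content_id), "")

def second_app_display(ad: str) -> None:
    """Second app: render the ad as an overlay in its display."""
    if ad:
        print(f"[overlay] {ad}")

# The second app supplies its identity and current media to the first
# app (claims 4-6), which retrieves matching content and hands it back.
second_app_display(first_app_retrieve("tv-123", "movie-42"))
```

The dependent claims mainly vary where the lookup key comes from (device identity, currently displayed media) and when the fetch happens (on request versus prefetched at predetermined times and cached).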
2,600
10,823
10,823
15,767,453
2,683
Biometric data about a vehicle occupant are received from a wearable device. Based at least in part on the biometric data, an occupant alertness and an occupant workload are determined. A rate of transmission of messages to the occupant is adjusted based at least in part on at least one of the occupant workload and the occupant alertness.
1. A system, comprising a computer including a processor and a memory, the memory storing instructions executable by the computer to: receive biometric data about a vehicle occupant from a wearable device; based at least in part on the biometric data, determine an occupant alertness and an occupant workload; and adjust a rate of transmission of messages to the occupant based at least in part on at least one of the occupant workload and the occupant alertness. 2. The system of claim 1, wherein the instructions further include instructions to decrease the rate of message transmission when the occupant workload is above a first workload threshold. 3. The system of claim 1, wherein the instructions further include instructions to increase the rate of message transmission when the occupant workload is below a second workload threshold and the occupant alertness is below an alertness threshold. 4. The system of claim 3, wherein the instructions further include instructions to send a personalized message based on the biometric data when the occupant alertness is below the alertness threshold. 5. The system of claim 1, wherein the instructions include instructions to actuate an output on the wearable device based on the message. 6. The system of claim 1, wherein the instructions further include instructions to prioritize the messages and to suppress low priority messages when the occupant workload is above a first workload threshold. 7. The system of claim 1, wherein the instructions further include instructions to adjust the rate of transmission of the messages in a user device, the user device configured to transmit the messages to at least one of the wearable device and a vehicle human machine interface. 8. The system of claim 7, wherein the instructions further include instructions to transmit the messages to the wearable device when the occupant alertness is below an alertness threshold. 9. The system of claim 7, wherein the instructions further include instructions to transmit the messages to the vehicle human machine interface when the occupant alertness is above an alertness threshold and the occupant workload is below a second workload threshold. 10. The system of claim 1, wherein the biometric data include at least one of heartbeat, blood pressure, skin temperature, and electrocardiogram. 11. A method, comprising: receiving biometric data about a vehicle occupant from a wearable device; based at least in part on the biometric data, determining an occupant alertness and an occupant workload; and adjusting a rate of transmission of messages to the occupant based at least in part on at least one of the occupant workload and the occupant alertness. 12. The method of claim 11, further comprising decreasing the rate of message transmission when the occupant workload is above a first workload threshold. 13. The method of claim 11, further comprising increasing the rate of message transmission when the occupant workload is below a second workload threshold and the occupant alertness is below an alertness threshold. 14. The method of claim 13, further comprising sending a personalized message based on the biometric data when the occupant alertness is below the alertness threshold. 15. The method of claim 11, further comprising actuating an output on the wearable device based on the message. 16. The method of claim 11, further comprising prioritizing the messages and suppressing low priority messages when the occupant workload is above a first workload threshold. 17. 
The method of claim 11, further comprising adjusting the rate of transmission of the messages in a user device, the user device transmitting the messages to at least one of the wearable device and a vehicle human machine interface. 18. The method of claim 17, further comprising transmitting the messages to the wearable device when the occupant alertness is below an alertness threshold. 19. The method of claim 17, further comprising transmitting the messages to the vehicle human machine interface when the occupant alertness is above an alertness threshold and the occupant workload is below a second workload threshold. 20. The method of claim 11, wherein the biometric data include at least one of heartbeat, blood pressure, skin temperature, and electrocardiogram.
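The system and method claims reduce to threshold logic on two derived scores. A minimal sketch, assuming workload and alertness have already been normalized to 0..1 from the biometric data; the threshold and rate constants are illustrative, not taken from the application:

```python
# Threshold logic per claims 2-3 and 12-13: slow messages down under
# high workload; speed them up when workload is low but alertness is low.
HIGH_WORKLOAD = 0.7      # "first workload threshold" (assumed value)
LOW_WORKLOAD = 0.3       # "second workload threshold" (assumed value)
LOW_ALERTNESS = 0.4      # "alertness threshold" (assumed value)

def adjust_rate(base_rate: float, workload: float, alertness: float) -> float:
    """Return a messages-per-minute rate for the occupant."""
    if workload > HIGH_WORKLOAD:
        return base_rate / 2           # busy occupant: fewer messages
    if workload < LOW_WORKLOAD and alertness < LOW_ALERTNESS:
        return base_rate * 2           # idle but drowsy: more messages
    return base_rate

print(adjust_rate(4.0, workload=0.8, alertness=0.9))  # -> 2.0
print(adjust_rate(4.0, workload=0.2, alertness=0.2))  # -> 8.0
```

Routing (claims 7-9 and 17-19) layers on the same comparisons: messages go to the wearable when alertness is below the threshold, and to the vehicle human machine interface when alertness is above it and workload is low.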
2,600
10,824
10,824
16,229,116
2,674
A display device is connected to the exterior of a document processing device. The display device includes a screen positioned to display in a first direction away from the exterior. Lights are connected to the display device. The lights are positioned to emit light in a second direction toward the exterior of the document processing device. A processor is operatively connected to the display device and the lights. The processor is adapted to control the lights to emit corresponding lighting of a graphic item appearing on the screen.
1. An apparatus comprising: a document processing device having an exterior; a display device connected to the exterior, wherein the display device includes a screen positioned to display in a first direction away from the exterior; lights connected to the display device, wherein the lights are connected to a surface of the display device that is opposite the screen, and wherein the lights are positioned to emit light in a second direction toward the exterior; and a processor operatively connected to the display device and the lights, wherein the processor is adapted to control the lights to emit corresponding lighting of a graphic item appearing on the screen. 2. The apparatus according to claim 1, wherein the corresponding lighting appears on the exterior. 3. The apparatus according to claim 1, wherein the processor is adapted to control the lights such that the graphic item and the corresponding lighting are the same color. 4. The apparatus according to claim 3, wherein the processor is adapted to control the lights such that the graphic item and the corresponding lighting change colors in coordination. 5. The apparatus according to claim 1, wherein the graphic item changes at a first pattern of change and the corresponding lighting changes at the first pattern of change in coordination with changing of the graphic item. 6. The apparatus according to claim 1, wherein the lights comprise multi-color lights. 7. The apparatus according to claim 1, wherein different colors and patterns of the graphic item and the corresponding lighting indicate different status conditions of the document processing device including error conditions, warning conditions, active processing conditions, and processing complete conditions. 8. An apparatus comprising: a document processing device having an exterior and a bottom of the exterior adjacent a surface upon which the document processing device rests; a display device connected to the exterior, wherein the display device includes a screen positioned to display in a first direction away from the exterior; display lights connected to the display device, wherein the display lights are connected to a surface of the display device that is opposite the screen, and wherein the display lights are positioned to emit light in a second direction toward the exterior; bottom lights connected to the bottom of the exterior, wherein the bottom lights are positioned to emit light in a third direction toward the surface upon which the document processing device rests; and a processor operatively connected to the display device and the display lights, wherein the processor is adapted to control the display lights and the bottom lights to emit corresponding lighting of a graphic item appearing on the screen. 9. The apparatus according to claim 8, wherein the corresponding lighting appears on the exterior and the surface upon which the document processing device rests. 10. The apparatus according to claim 8, wherein the processor is adapted to control the display lights and the bottom lights such that the graphic item and the corresponding lighting are the same color. 11. The apparatus according to claim 10, wherein the processor is adapted to control the display lights and the bottom lights such that the graphic item and the corresponding lighting change colors in coordination. 12. 
The apparatus according to claim 8, wherein the graphic item changes at a first pattern of change and the corresponding lighting changes at the first pattern of change in coordination with changing of the graphic item. 13. The apparatus according to claim 8, wherein the display lights and the bottom lights comprise multi-color lights. 14. The apparatus according to claim 8, wherein different colors and patterns of the graphic item and the corresponding lighting indicate different status conditions of the document processing device including error conditions, warning conditions, active processing conditions, and processing complete conditions. 15. A method comprising: determining, by a processor operatively connected to a display device, a status of a graphic item appearing on a screen of the display device, wherein the screen is positioned to display in a first direction away from an exterior of a document processing device connected to the display device; and controlling, by the processor, display lights that are connected to a surface of the display device that is opposite the screen to emit corresponding lighting of the graphic item in a second direction toward the exterior of the document processing device, wherein the display lights are connected to the display device. 16. The method according to claim 15, wherein the display lights are controlled to emit the corresponding lighting on the exterior. 17. The method according to claim 15, wherein the display lights are controlled such that the graphic item and the corresponding lighting are the same color. 18. The method according to claim 17, wherein the display lights are controlled such that the graphic item and the corresponding lighting change colors in coordination. 19. The method according to claim 15, wherein the graphic item changes at a first pattern of change and wherein the display lights are controlled such that the corresponding lighting changes at the first pattern of change in coordination with changing of the graphic item. 20. The method according to claim 15, wherein different colors and patterns of the graphic item and the corresponding lighting indicate different status conditions of the document processing device including error conditions, warning conditions, active processing conditions, and processing complete conditions.
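The coordination recited in claims 3, 4, 10, and 11 can be read as driving the on-screen graphic item and the exterior lights from one status table, so color and pattern always match. A hedged sketch: set_screen and set_leds are hypothetical stand-ins for real display and LED-driver APIs, and the status-to-color mapping is assumed, not specified by the patent:

```python
# One status table drives both outputs, keeping the graphic item and
# the corresponding exterior lighting in the same color and pattern.
STATUS_COLORS = {
    "error": ("red", "blink"),
    "warning": ("amber", "blink"),
    "processing": ("blue", "pulse"),
    "complete": ("green", "solid"),
}

def set_screen(color: str, pattern: str) -> None:
    print(f"screen graphic: {color} ({pattern})")   # display API stand-in

def set_leds(color: str, pattern: str) -> None:
    print(f"exterior lights: {color} ({pattern})")  # LED driver stand-in

def show_status(status: str) -> None:
    """Drive the graphic item and the lights from the same entry."""
    color, pattern = STATUS_COLORS[status]
    set_screen(color, pattern)
    set_leds(color, pattern)

show_status("processing")
```

Because both outputs read the same entry, a change of status changes the graphic and the lighting together, which is the "change colors in coordination" behavior the claims describe.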
2,600
10,825
10,825
15,452,708
2,664
NDE probes provide unique data signals from a remote object such that they can be used to accurately and precisely locate a position. With a computer processor, the data signals are converted into a positional fingerprint that is compact and easily analyzed as a file or record of probe position. The positional fingerprint is stored in association with the position or object to verify the same position at another time. Another probe detects other data signals for the object at another time. Under a similar transformation into a positional fingerprint, the position of the other probe can be matched to the first by comparing positional fingerprints. The comparison may use a probabilistic comparison and/or compare several different fingerprints from several different locations and times to ensure a best match. Position verification between probes may ensure a repair has been completed in the proper location, verify system integrity, or investigate potential problems.
1. A method of verifying a position of a tool, the method comprising: receiving first signals associated with an object at a position of interest; transforming, with a computer processor, the first signals into a positional fingerprint of the position; storing, in a computer database, the positional fingerprint in association with the object; receiving second signals associated with the object at an unknown position of the tool; transforming, with a computer processor, the second signals into a positional fingerprint of the unknown position; and comparing, with a computer processor, the positional fingerprint of the position with the positional fingerprint of the unknown position to determine if the tool is at the position of interest. 2. The method of claim 1, wherein the first signals are nondestructive testing signals from nondestructive testing being performed on the object. 3. The method of claim 1, wherein the tool is a repair tool including at least one of a welding tool and a coating tool. 4. The method of claim 3, wherein the object is a nuclear power plant component, and wherein the first signals are nondestructive testing signals from nondestructive testing being performed on the component. 5. The method of claim 1, wherein the first and the second signals are 2-D visual images of the object and its surroundings. 6. The method of claim 1, wherein the transforming the first signals and transforming the second signals includes at least one of a Fourier transform of the first and the second signals, a time/space domain, a power transform, a correlation transform, a cepstrum transform, and a wavelet transform. 7. The method of claim 6, wherein the positional fingerprint of the position is a simplified association of frequencies and magnitudes of the first signals. 8. The method of claim 1, wherein the comparing includes at least one of Naive-Bayes, K-Nearest Neighbors, K-Means Clustering, Fisher Linear Discriminant, and Widrow's Adaptive Linear Combiner analysis that produces a probabilistic likelihood of a match between the positional fingerprint of the position and the positional fingerprint of the unknown position. 9. The method of claim 8, wherein the analysis produces a probability that the unknown position matches the position of interest, and wherein a match is determined based on the probability falling above a threshold probability. 10. The method of claim 1, wherein the receiving first signals, transforming the first signals, and storing the positional fingerprint are repeated for a plurality of positions of interest in a first operation, and wherein the receiving second signals, transforming the second signals, and comparing the positional fingerprint of the position with the positional fingerprint of the unknown position are repeated for a plurality of unknown positions in a second operation. 11. The method of claim 10, wherein the first operation is an inspection operation in a nuclear power plant during an operation outage, and wherein the second operation is a repair operation performed with the tool on the object later during the outage. 12. The method of claim 10, wherein the position match is determined based on a best probabilistic fit between one of the plurality of fingerprints from the first operation and one of the plurality of fingerprints from the second operation. 13. 
A device for nondestructive testing of a remote object in a nuclear power plant, the device comprising: a nondestructive testing emitter and receiver configured to receive first signals associated with the object at a position of interest; and a computer processor configured to, transform the first signals into a positional fingerprint of the position, and store, in a computer database, the positional fingerprint in association with the object. 14. The device of claim 13, wherein the nondestructive testing emitter and receiver is an ultrasonic emitter and receiver using ultrasonic energy that reflects from the object being tested. 15. The device of claim 13, wherein the computer processor is further configured to transform the first signals using a Fourier transform, a time/space domain, a power transform, a correlation transform, a cepstrum transform, or a wavelet transform of the first signals. 16. The device of claim 15, wherein the positional fingerprint of the position is a simplified association of frequencies and magnitudes of the first signals. 17. The device of claim 13, wherein the computer processor is further configured to, receive second signals associated with the object at an unknown position, transform the second signals into a positional fingerprint of the unknown position, and compare the positional fingerprint of the position with the positional fingerprint of the unknown position to determine if the position of interest matches the unknown position. 18. The device of claim 17, wherein the computer processor is further configured to compare based on at least one of Naive-Bayes, K-Nearest Neighbors, K-Means Clustering, Fisher Linear Discriminant, and Widrow's Adaptive Linear Combiner analysis that produces a probabilistic likelihood of a match between the positional fingerprint of the position and the positional fingerprint of the unknown position. 19. The device of claim 18, wherein the analysis produces a probability that the unknown position matches the position of interest, and wherein a match is determined based on the probability falling above a threshold probability. 20. The device of claim 13, wherein the nondestructive testing emitter and receiver is a camera, and the first signals are 2-D images associated with the object at the position of interest, and wherein the position of interest is a flaw in the object.
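Claims 6-9 outline a transform-then-classify pipeline: reduce each raw probe signal to a compact spectral fingerprint, then match fingerprints probabilistically. A toy sketch of that idea, assuming 1-D probe signals and using a simple nearest-neighbor match as a stand-in for the classifiers the claims list; the bin count, signal shapes, and labels are illustrative:

```python
# Toy fingerprint-and-match pipeline: Fourier-transform each signal,
# keep a few low-frequency magnitudes as the "positional fingerprint",
# and match an unknown position to the closest stored fingerprint.
import numpy as np

N_BINS = 16  # how many frequency magnitudes to keep per fingerprint

def fingerprint(signal: np.ndarray) -> np.ndarray:
    """Reduce a raw probe signal to a compact, normalized spectral
    fingerprint (a 'simplified association of frequencies and
    magnitudes' in the claims' terms)."""
    mags = np.abs(np.fft.rfft(signal))[:N_BINS]
    return mags / (np.linalg.norm(mags) + 1e-12)

def best_match(unknown: np.ndarray, stored: dict[str, np.ndarray]) -> str:
    """Nearest-neighbor comparison, standing in for the listed
    classifiers (e.g., k-nearest neighbors)."""
    fp = fingerprint(unknown)
    return min(stored, key=lambda k: np.linalg.norm(stored[k] - fp))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
position_a = np.sin(2 * np.pi * 5 * t)    # signal seen at one position
position_b = np.sin(2 * np.pi * 11 * t)   # signal seen at another
db = {"weld-A": fingerprint(position_a), "weld-B": fingerprint(position_b)}
print(best_match(position_b + 0.05 * rng.standard_normal(256), db))  # weld-B
```

A distance threshold on the winning match would play the role of the claims' probability threshold, and repeating the store/compare steps over many positions gives the best-fit matching of claims 10-12.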
NDE probes provide unique data signals from a remote object that can be used to accurately and precisely locate a position. A computer processor converts the data signals into a positional fingerprint that is compact and easily analyzed as a record of the probe position. The positional fingerprint is stored in association with the position or object so that the same position can be verified at another time. Another probe detects other data signals for the object at another time. Under a similar transformation into a positional fingerprint, the position of the other probe can be matched to the first by comparing positional fingerprints. The comparison may be probabilistic and/or may compare several different fingerprints from several different locations and times to ensure a best match. Position verification between probes may ensure a repair has been completed in the proper location, verify system integrity, or investigate potential problems.1. A method of verifying a position of a tool, the method comprising: receiving first signals associated with an object at a position of interest; transforming, with a computer processor, the first signals into a positional fingerprint of the position; storing, in a computer database, the positional fingerprint in association with the object; receiving second signals associated with the object at an unknown position of the tool; transforming, with a computer processor, the second signals into a positional fingerprint of the unknown position; and comparing, with a computer processor, the positional fingerprint of the position with the positional fingerprint of the unknown position to determine if the tool is at the position of interest. 2. The method of claim 1, wherein the first signals are nondestructive testing signals from nondestructive testing being performed on the object. 3. The method of claim 1, wherein the tool is a repair tool including at least one of a welding tool and a coating tool. 4. The method of claim 3, wherein the object is a nuclear power plant component, and wherein the first signals are nondestructive testing signals from nondestructive testing being performed on the component. 5. The method of claim 1, wherein the first and the second signals are 2-D visual images of the object and its surroundings. 6. The method of claim 1, wherein the transforming the first signals and transforming the second signals includes at least one of a Fourier transform of the first and the second signals, a time/space domain transform, a power transform, a correlation transform, a cepstrum transform, and a wavelet transform. 7. The method of claim 6, wherein the positional fingerprint of the position is a simplified association of frequencies and magnitudes of the first signals. 8. The method of claim 1, wherein the comparing includes at least one of Naive-Bayes, K-Nearest Neighbors, K-Means Clustering, Fisher Linear Discriminant, and Widrow's Adaptive Linear Combiner analysis that produces a probabilistic likelihood of a match between the positional fingerprint of the position and the positional fingerprint of the unknown position. 9. The method of claim 8, wherein the analysis produces a probability that the unknown position matches the position of interest, and wherein a match is determined based on the probability falling above a threshold probability. 10. 
The method of claim 1, wherein the receiving first signals, transforming the first signals, and storing the positional fingerprint are repeated for a plurality of positions of interest in a first operation, and wherein the receiving second signals, transforming the second signals, and comparing the positional fingerprint of the position with the positional fingerprint of the unknown position are repeated for a plurality of unknown positions in a second operation. 11. The method of claim 10, wherein the first operation is an inspection operation in a nuclear power plant during an operation outage, and wherein the second operation is a repair operation performed with the tool on the object later during the outage. 12. The method of claim 10, wherein the position match is determined based on a best probabilistic fit between one of the plurality of fingerprints from the first operation and one of the plurality of fingerprints from the second operation. 13. A device for nondestructive testing of a remote object in a nuclear power plant, the device comprising: a nondestructive testing emitter and receiver configured to receive first signals associated with the object at a position of interest; and a computer processor configured to transform the first signals into a positional fingerprint of the position, and store, in a computer database, the positional fingerprint in association with the object. 14. The device of claim 13, wherein the nondestructive testing emitter and receiver is an ultrasonic emitter and receiver using ultrasonic energy that reflects from the object being tested. 15. The device of claim 13, wherein the computer processor is further configured to transform the first signals using a Fourier transform, a time/space domain transform, a power transform, a correlation transform, a cepstrum transform, or a wavelet transform of the first signals. 16. The device of claim 15, wherein the positional fingerprint of the position is a simplified association of frequencies and magnitudes of the first signals. 17. The device of claim 13, wherein the computer processor is further configured to receive second signals associated with the object at an unknown position, transform the second signals into a positional fingerprint of the unknown position, and compare the positional fingerprint of the position with the positional fingerprint of the unknown position to determine if the position of interest matches the unknown position. 18. The device of claim 17, wherein the computer processor is further configured to compare based on at least one of Naive-Bayes, K-Nearest Neighbors, K-Means Clustering, Fisher Linear Discriminant, and Widrow's Adaptive Linear Combiner analysis that produces a probabilistic likelihood of a match between the positional fingerprint of the position and the positional fingerprint of the unknown position. 19. The device of claim 18, wherein the analysis produces a probability that the unknown position matches the position of interest, and wherein a match is determined based on the probability falling above a threshold probability. 20. The device of claim 13, wherein the nondestructive testing emitter and receiver is a camera, and the first signals are 2-D images associated with the object at the position of interest, and wherein the position of interest is a flaw in the object.
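The fingerprint-and-compare workflow in the record above (transform probe signals, keep a simplified association of frequencies and magnitudes, then make a probabilistic match against stored fingerprints) can be illustrated with a short sketch. Everything below is an illustrative assumption rather than the application's implementation: the FFT-peak fingerprint, the softmax probability model, and the 0.9 threshold are stand-ins chosen to show the shape of the computation.

```python
import numpy as np

def positional_fingerprint(signal, n_peaks=16):
    """Keep only the n_peaks strongest frequency magnitudes of a 1-D signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    fp = np.zeros_like(spectrum)
    idx = np.argsort(spectrum)[-n_peaks:]      # dominant frequency bins
    fp[idx] = spectrum[idx]
    return fp / (np.linalg.norm(fp) + 1e-12)   # scale-invariant fingerprint

def match_probabilities(unknown_fp, stored_fps, temperature=0.05):
    """Softmax over negative distances: P(each stored position is a match)."""
    d = np.array([np.linalg.norm(unknown_fp - fp) for fp in stored_fps])
    w = np.exp(-d / temperature)
    return w / w.sum()

rng = np.random.default_rng(0)

# First operation (e.g. inspection): fingerprint each position of interest.
raw = [rng.standard_normal(1024) for _ in range(5)]
stored = [positional_fingerprint(s) for s in raw]

# Second operation (e.g. repair): re-measure at a nominally unknown position;
# here we reuse position 2 with measurement noise added.
probe = positional_fingerprint(raw[2] + 0.1 * rng.standard_normal(1024))

p = match_probabilities(probe, stored)
best = int(np.argmax(p))
if p[best] > 0.9:                              # threshold probability
    print(f"tool verified at stored position {best} (p={p[best]:.2f})")
else:
    print("no confident match; reposition the tool")
```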
2,600
10,826
10,826
16,247,632
2,653
Aspects described herein provide an apparatus that allows the user or a third party to easily monitor the volume levels of a set of headphones before and during use, so as to prevent damage to the user's ears from exposure to excessive volume. An analogue meter indicative of the signal level being input to or reproduced by the transducer within one or both of the headphone cans is provided, built into the outer surface of the headphone can so that it is easily visible to a user who is about to put on the headphones, or to a third party viewing the user wearing the headphones. The analogue meter is preferably indicative of the sound pressure being generated by the transducer in the headphone can in which the meter is co-located, and may, for example, be a VU meter or a PPM meter.
1. A headphone device, comprising: a pair of headphone cans, each headphone can comprising a generally cup-shaped can body, the can body having a concave configuration such that a cavity is defined, the cavity being enclosed by a user contacting portion, wherein the user contacting portion is arranged in use to face internally towards the head of the user such that it presses against or sits around the ear of the user; and a connecting band for connecting the pair of headphone cans, each headphone can being mounted to an end of the connecting band such that the user contacting portion of one headphone can is facing the user contacting portion of the other headphone can, wherein the connecting band is arranged in use to sit around the head of the user such that the headphone cans are aligned with the ears of the user, wherein at least one of the headphone cans comprises: a transducer for converting electrical signals to audio signals, the transducer being disposed within the cavity of the can body and arranged to face internally towards the head of the user such that in use the audio signals are directed towards the ear of the user; and a needle based volume unit (VU) meter for providing a visual indication of a signal level of the audio signal being reproduced by the transducer in the headphone can, the meter comprising a movable needle and a scale, the scale having units representative of the signal level of the audio signal, wherein the units of the scale are volume units (VU), wherein the meter is calibrated such that the sensitivity of the movable needle to the signal level of the audio signal being reproduced by the transducer corresponds to the units of the scale, and wherein the meter is arranged on the exterior of the headphone can such that it faces outwards so as to be visible to a third party other than the wearer when being worn. 2. A device according to claim 1, further comprising a light source to illuminate the meter for use in low ambient light conditions. 3. A device according to claim 1, wherein the device comprises one or more microphones to sample the surrounding soundfield to determine an external signal incident on the device. 4. A device according to claim 3, and further comprising noise cancellation circuitry arranged to operate in dependence on the external signal. 5. A device according to claim 1, wherein the respective meters are oriented to face in any of a forward, side, or rearward direction with respect to the wearer's head. 6. A device according to claim 3, wherein the needle based volume unit (VU) meter is further arranged to provide an indication of a property of the external signal determined by the one or more microphones. 7. A device according to claim 6, wherein the property is the sound level of the surrounding soundfield. 8. 
A wearable sound reproducing device comprising one or more sound reproduction units arranged to be worn on or in the ears of a user, wherein an audio signal is being fed to the device for reproduction by the sound reproduction units, and one or more visual displays mounted so as to be co-located with the sound reproduction units and arranged to display a signal level of the audio signal being reproduced by the sound reproduction units, wherein the one or more visual displays comprise a needle based volume unit (VU) meter, the meter comprising a movable needle and a scale, the scale having units representative of the signal level of the audio signal being reproduced by the sound reproduction units, wherein the units of the scale are volume units (VU), and wherein the meter is calibrated such that the sensitivity of the movable needle to the signal level of the audio signal being reproduced by the sound reproduction units corresponds to the units of the scale. 9. A device according to claim 8, further comprising a light source to illuminate the meter for use in low ambient light conditions. 10. A device according to claim 8, wherein the device comprises one or more microphones to sample the surrounding soundfield to determine an external signal incident on the device. 11. A device according to claim 10, and further comprising noise cancellation circuitry arranged to operate in dependence on the external signal. 12. A device according to claim 10, wherein the needle based volume unit (VU) meter is further arranged to provide an indication of a property of the external signal determined by the one or more microphones. 13. A device according to claim 12, wherein the property is the sound level of the surrounding soundfield. 14. A device according to claim 10, wherein the wearable sound reproducing device is a pair of headphones having respective ear covering units in which are mounted the respective sound reproduction units for each ear, respective visual displays being mounted on an outward facing surface of the ear covering units so as to be visible to a third party other than the wearer when being worn.
Aspects described herein provide an apparatus that allows the user or a third party to easily monitor the volume levels of a set of headphones before and during use, so as to prevent damage to the user's ears from exposure to excessive volume. An analogue meter indicative of the signal level being input to or reproduced by the transducer within one or both of the headphone cans is provided, built into the outer surface of the headphone can so that it is easily visible to a user who is about to put on the headphones, or to a third party viewing the user wearing the headphones. The analogue meter is preferably indicative of the sound pressure being generated by the transducer in the headphone can in which the meter is co-located, and may, for example, be a VU meter or a PPM meter.1. A headphone device, comprising: a pair of headphone cans, each headphone can comprising a generally cup-shaped can body, the can body having a concave configuration such that a cavity is defined, the cavity being enclosed by a user contacting portion, wherein the user contacting portion is arranged in use to face internally towards the head of the user such that it presses against or sits around the ear of the user; and a connecting band for connecting the pair of headphone cans, each headphone can being mounted to an end of the connecting band such that the user contacting portion of one headphone can is facing the user contacting portion of the other headphone can, wherein the connecting band is arranged in use to sit around the head of the user such that the headphone cans are aligned with the ears of the user, wherein at least one of the headphone cans comprises: a transducer for converting electrical signals to audio signals, the transducer being disposed within the cavity of the can body and arranged to face internally towards the head of the user such that in use the audio signals are directed towards the ear of the user; and a needle based volume unit (VU) meter for providing a visual indication of a signal level of the audio signal being reproduced by the transducer in the headphone can, the meter comprising a movable needle and a scale, the scale having units representative of the signal level of the audio signal, wherein the units of the scale are volume units (VU), wherein the meter is calibrated such that the sensitivity of the movable needle to the signal level of the audio signal being reproduced by the transducer corresponds to the units of the scale, and wherein the meter is arranged on the exterior of the headphone can such that it faces outwards so as to be visible to a third party other than the wearer when being worn. 2. A device according to claim 1, further comprising a light source to illuminate the meter for use in low ambient light conditions. 3. A device according to claim 1, wherein the device comprises one or more microphones to sample the surrounding soundfield to determine an external signal incident on the device. 4. A device according to claim 3, and further comprising noise cancellation circuitry arranged to operate in dependence on the external signal. 5. A device according to claim 1, wherein the respective meters are oriented to face in any of a forward, side, or rearward direction with respect to the wearer's head. 6. A device according to claim 3, wherein the needle based volume unit (VU) meter is further arranged to provide an indication of a property of the external signal determined by the one or more microphones. 7. 
A device according to claim 6, wherein the property is the sound level of the surrounding soundfield. 8. A wearable sound reproducing device comprising one or more sound reproduction units arranged to be worn on or in the ears of a user, wherein an audio signal is being fed to the device for reproduction by the sound reproduction units, and one or more visual displays mounted so as to be co-located with the sound reproduction units and arranged to display a signal level of the audio signal being reproduced by the sound reproduction units, wherein the one or more visual displays comprise a needle based volume unit (VU) meter, the meter comprising a movable needle and a scale, the scale having units representative of the signal level of the audio signal being reproduced by the sound reproduction units, wherein the units of the scale are volume units (VU), and wherein the meter is calibrated such that the sensitivity of the movable needle to the signal level of the audio signal being reproduced by the sound reproduction units corresponds to the units of the scale. 9. A device according to claim 8, further comprising a light source to illuminate the meter for use in low ambient light conditions. 10. A device according to claim 8, wherein the device comprises one or more microphones to sample the surrounding soundfield to determine an external signal incident on the device. 11. A device according to claim 10, and further comprising noise cancellation circuitry arranged to operate in dependence on the external signal. 12. A device according to claim 10, wherein the needle based volume unit (VU) meter is further arranged to provide an indication of a property of the external signal determined by the one or more microphones. 13. A device according to claim 12, wherein the property is the sound level of the surrounding soundfield. 14. A device according to claim 10, wherein the wearable sound reproducing device is a pair of headphones having respective ear covering units in which are mounted the respective sound reproduction units for each ear, respective visual displays being mounted on an outward facing surface of the ear covering units so as to be visible to a third party other than the wearer when being worn.
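The record above describes a needle VU meter calibrated so that the needle's sensitivity corresponds to a volume-unit scale. As a rough digital analogue of that behaviour, a sketch follows; the first-order 300 ms ballistics, the 0 VU reference, and the sample rate are illustrative assumptions, not details from the application.

```python
import numpy as np

SAMPLE_RATE = 48_000        # Hz, illustrative
INTEGRATION_S = 0.300       # classic VU-meter integration time
REF_MEAN_SQUARE = 0.5       # mean square of a full-scale sine, taken as 0 VU

def vu_needle(audio):
    """Needle reading in VU for each sample of a mono float signal."""
    alpha = 1.0 - np.exp(-1.0 / (SAMPLE_RATE * INTEGRATION_S))
    mean_square = 0.0
    readings = np.empty_like(audio)
    for i, x in enumerate(audio):
        mean_square += alpha * (x * x - mean_square)   # first-order ballistics
        readings[i] = 10.0 * np.log10(max(mean_square, 1e-12) / REF_MEAN_SQUARE)
    return readings

# A 1 kHz tone at half the full-scale amplitude settles near -6 VU.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = 0.5 * np.sin(2.0 * np.pi * 1000.0 * t)
print(f"settled needle reading: {vu_needle(tone)[-1]:.1f} VU")
```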
2,600
10,827
10,827
15,580,827
2,664
A method and related apparatus for transferring data between a mobile communication unit and a plurality of access points, wherein each of said plurality of access points and said mobile communication unit is provided with a transceiver, said mobile communication unit being arranged to travel a predetermined route, the method comprising: adjusting the transmission power of at least the transceiver of the mobile communication unit substantially to a maximum level; and attenuating the transmission signal of each transceiver such that the signal level of a received signal in a transmission between the mobile communication unit and each of the access points is closer to an optimal level, wherein the level of attenuation is set constant at least in the mobile communication unit.
1. A method for transferring data between a mobile communication unit and a plurality of access points, wherein each of said plurality of access points and said mobile communication unit is provided with a transceiver, said mobile communication unit being arranged to travel a predetermined route, the method comprising: adjusting the transmission power of at least the transceiver of the mobile communication unit substantially to a maximum level; and attenuating the transmission signal of each transceiver such that the signal level of a received signal in a transmission between the mobile communication unit and each of the access points is closer to an optimal level, wherein the level of attenuation is set constant at least in the mobile communication unit. 2. The method according to claim 1, further comprising: setting the level of attenuation in one or more of said plurality of access points to the same level as in the mobile communication unit. 3. The method according to claim 1, further comprising: adjusting the transmission power of the transceivers of one or more of said plurality of access points to a level sufficient for transmitting acknowledgements. 4. The method according to claim 1, further comprising: determining a group of attenuation levels, each attenuation level providing an optimal received signal level in the transmission between the mobile communication unit and one of the access points; and selecting the lowest attenuation level from said group to be used at least in the mobile communication unit. 5. The method according to claim 1, further comprising: determining a group of distances, each distance indicating the shortest distance between the route of the mobile communication unit and one of the access points; and adjusting the constant attenuation level to be used on the basis of the longest distance in said group. 6. The method according to claim 1, further comprising: adjusting the transmission power of the transceivers of the access points such that interference between said plurality of access points is minimized. 7. The method according to claim 1, wherein the transmission signal is attenuated by an attenuator between the transceiver and an antenna of the access point or the mobile communication unit. 8. (canceled) 9. The method according to claim 1, wherein the mobile communication unit is arranged in a public transportation vehicle, such as a train, a tram, a metro train, or a bus, arranged to travel a predetermined route. 10. (canceled) 11. A wireless offload system for transferring data between a mobile communication unit and a plurality of access points, wherein each of said plurality of access points and said mobile communication unit is provided with a transceiver, said mobile communication unit being arranged to travel a predetermined route, wherein the transmission power of at least the transceiver of the mobile communication unit is arranged to be adjusted substantially to a maximum level; and the transmission signal of each transceiver is arranged to be attenuated such that the signal level of a received signal in a transmission between the mobile communication unit and each of the access points is closer to an optimal level, wherein the level of attenuation is constant at least in the mobile communication unit. 12-20. (canceled) 21. 
A mobile communication unit of a wireless offload system, said mobile communication unit comprising a transceiver, and said mobile communication unit being arranged to travel a predetermined route, wherein the transmission power of the transceiver of the mobile communication unit is arranged to be adjusted substantially to a maximum level; and the transmission signal of the transceiver is arranged to be attenuated such that the signal level of a received signal in an access point is closer to an optimal level, wherein the level of attenuation is constant. 22. The mobile communication unit according to claim 21, wherein a group of attenuation levels is arranged to be determined, each attenuation level providing an optimal received signal level in the transmission between the mobile communication unit and one or more access points; and the lowest attenuation level from said group is used at least in the mobile communication unit. 23. The mobile communication unit according to claim 21, wherein a group of distances is arranged to be determined, each distance indicating the shortest distance between the route of the mobile communication unit and one of the access points; and the constant attenuation level to be used is arranged to be adjusted on the basis of the longest distance in said group. 24. The mobile communication unit according to claim 21, further comprising an attenuator between the transceiver and an antenna of the mobile communication unit. 25. The mobile communication unit according to claim 21, wherein the transmission between the mobile communication unit and the access points is arranged to be carried out according to any standard of the IEEE 802.11 series. 26. The mobile communication unit according to claim 21, wherein the mobile communication unit is arranged in a public transportation vehicle, such as a train, a tram, a metro train, or a bus, arranged to travel a predetermined route. 27. An access point of a wireless offload system, said access point comprising a transceiver, said access point being arranged in the vicinity of a predetermined route of a mobile communication unit, wherein the transmission power of the transceiver is at a level sufficient for transmitting acknowledgements; and the transmission signal of the transceiver is arranged to be attenuated such that the signal level of a received signal in the mobile communication unit is closer to an optimal level. 28. The access point according to claim 27, wherein the level of attenuation is at the same level as in the mobile communication unit. 29. The access point according to claim 27, wherein the transmission power of the transceiver is adjusted to minimize interference between a plurality of access points. 30. The access point according to claim 27, comprising an attenuator between the transceiver and an antenna of the access point. 31. The access point according to claim 27, wherein the transmission between the mobile communication unit and the access points is arranged to be carried out according to any standard of the IEEE 802.11 series.
A method and related apparatus for transferring data between a mobile communication unit and a plurality of access points, wherein each of said plurality of access points and said mobile communication unit is provided with a transceiver, said mobile communication unit being arranged to travel a predetermined route, the method comprising: adjusting the transmission power of at least the transceiver of the mobile communication unit substantially to a maximum level; and attenuating the transmission signal of each transceiver such that the signal level of a received signal in a transmission between the mobile communication unit and each of the access points is closer to an optimal level, wherein the level of attenuation is set constant at least in the mobile communication unit.1. A method for transferring data between a mobile communication unit and a plurality of access points, wherein each of said plurality of access points and said mobile communication unit is provided with a transceiver, said mobile communication unit being arranged to travel a predetermined route, the method comprising: adjusting the transmission power of at least the transceiver of the mobile communication unit substantially to a maximum level; and attenuating the transmission signal of each transceiver such that the signal level of a received signal in a transmission between the mobile communication unit and each of the access points is closer to an optimal level, wherein the level of attenuation is set constant at least in the mobile communication unit. 2. The method according to claim 1, further comprising: setting the level of attenuation in one or more of said plurality of access points to the same level as in the mobile communication unit. 3. The method according to claim 1, further comprising: adjusting the transmission power of the transceivers of one or more of said plurality of access points to a level sufficient for transmitting acknowledgements. 4. The method according to claim 1, further comprising: determining a group of attenuation levels, each attenuation level providing an optimal received signal level in the transmission between the mobile communication unit and one of the access points; and selecting the lowest attenuation level from said group to be used at least in the mobile communication unit. 5. The method according to claim 1, further comprising: determining a group of distances, each distance indicating the shortest distance between the route of the mobile communication unit and one of the access points; and adjusting the constant attenuation level to be used on the basis of the longest distance in said group. 6. The method according to claim 1, further comprising: adjusting the transmission power of the transceivers of the access points such that interference between said plurality of access points is minimized. 7. The method according to claim 1, wherein the transmission signal is attenuated by an attenuator between the transceiver and an antenna of the access point or the mobile communication unit. 8. (canceled) 9. The method according to claim 1, wherein the mobile communication unit is arranged in a public transportation vehicle, such as a train, a tram, a metro train, or a bus, arranged to travel a predetermined route. 10. (canceled) 11. 
A wireless offload system for transferring data between a mobile communication unit and a plurality of access points, wherein each of said plurality of access points and said mobile communication unit is provided with a transceiver, said mobile communication unit being arranged to travel a predetermined route, wherein the transmission power of at least the transceiver of the mobile communication unit is arranged to be adjusted substantially to a maximum level; and the transmission signal of each transceiver is arranged to be attenuated such that the signal level of a received signal in a transmission between the mobile communication unit and each of the access points is closer to an optimal level, wherein the level of attenuation is constant at least in the mobile communication unit. 12-20. (canceled) 21. A mobile communication unit of a wireless offload system, said mobile communication unit comprising a transceiver, and said mobile communication unit being arranged to travel a predetermined route, wherein the transmission power of the transceiver of the mobile communication unit is arranged to be adjusted substantially to a maximum level; and the transmission signal of the transceiver is arranged to be attenuated such that the signal level of a received signal in an access point is closer to an optimal level, wherein the level of attenuation is constant. 22. The mobile communication unit according to claim 21, wherein a group of attenuation levels is arranged to be determined, each attenuation level providing an optimal received signal level in the transmission between the mobile communication unit and one or more access points; and the lowest attenuation level from said group is used at least in the mobile communication unit. 23. The mobile communication unit according to claim 21, wherein a group of distances is arranged to be determined, each distance indicating the shortest distance between the route of the mobile communication unit and one of the access points; and the constant attenuation level to be used is arranged to be adjusted on the basis of the longest distance in said group. 24. The mobile communication unit according to claim 21, further comprising an attenuator between the transceiver and an antenna of the mobile communication unit. 25. The mobile communication unit according to claim 21, wherein the transmission between the mobile communication unit and the access points is arranged to be carried out according to any standard of the IEEE 802.11 series. 26. The mobile communication unit according to claim 21, wherein the mobile communication unit is arranged in a public transportation vehicle, such as a train, a tram, a metro train, or a bus, arranged to travel a predetermined route. 27. An access point of a wireless offload system, said access point comprising a transceiver, said access point being arranged in the vicinity of a predetermined route of a mobile communication unit, wherein the transmission power of the transceiver is at a level sufficient for transmitting acknowledgements; and the transmission signal of the transceiver is arranged to be attenuated such that the signal level of a received signal in the mobile communication unit is closer to an optimal level. 28. The access point according to claim 27, wherein the level of attenuation is at the same level as in the mobile communication unit. 29. The access point according to claim 27, wherein the transmission power of the transceiver is adjusted to minimize interference between a plurality of access points. 30. 
The access point according to claim 27, comprising an attenuator between the transceiver and an antenna of the access point. 31. The access point according to claim 27, wherein the transmission between the mobile communication unit and the access points is arranged to be carried out according to any standard of the IEEE 802.11 series.
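Claims 4 and 5 of the record above describe picking one constant attenuation level: compute a per-access-point attenuation that yields a good received level, then keep the lowest (equivalently, the value set by the longest route-to-AP distance under a monotone path-loss model). A hypothetical sketch follows; the free-space path-loss formula and all power figures are assumptions for illustration, not values from the application.

```python
import math

MAX_TX_DBM = 30.0        # transceiver at substantially maximum power
TARGET_RX_DBM = -65.0    # desired received signal level
FREQ_MHZ = 5_180.0       # an IEEE 802.11 channel, for illustration

def free_space_path_loss_db(distance_m):
    """FSPL = 20 log10(d_km) + 20 log10(f_MHz) + 32.44 (d in km, f in MHz)."""
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(FREQ_MHZ) + 32.44

def constant_attenuation_db(shortest_distances_m):
    """Lowest per-link attenuation, so even the longest link stays usable."""
    per_ap = [
        MAX_TX_DBM - free_space_path_loss_db(d) - TARGET_RX_DBM
        for d in shortest_distances_m
    ]
    # The minimum corresponds to the longest distance in the group, since
    # path loss grows monotonically with distance; never "amplify" below 0 dB.
    return max(0.0, min(per_ap))

# Shortest route-to-AP distances for three access points along the track.
print(f"constant attenuation: {constant_attenuation_db([40, 75, 120]):.1f} dB")
```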
2,600
10,828
10,828
14,968,555
2,612
A display device comprising a display screen for electronically displaying an image, said display screen having a textured surface on which a user can write without altering the image, is disclosed. A method for creating a display device having a display screen for electronically displaying an image, comprising the step of applying a film directly onto said display screen or applying a chemical or mechanical treatment directly onto said display screen to form a textured surface, is also disclosed.
1. A display device comprising a display screen for electronically displaying an image, said display screen having a textured surface on which a user can write without altering the image. 2. A display device as claimed in claim 1 wherein the user can write on said surface using chalk. 3. A display device as claimed in claim 1 wherein the user can write on said surface using a dry erase marker. 4. A display device as claimed in claim 1 wherein said textured surface alters optical properties of said image. 5. A display device as claimed in claim 1 wherein said textured surface is formed by a mechanical treatment of said display screen. 6. A display device as claimed in claim 1 wherein said textured surface is formed by a chemical treatment of said display screen. 7. A display device as claimed in claim 1 wherein said textured surface is formed by placing a removable film on said display screen. 8. A display device as claimed in claim 1 wherein said textured surface is transparent. 9. A display device comprising a display screen for electronically displaying an image, said display screen having a textured surface for altering the image to create a look and feel of a non-electronic work of art. 10. A display device as claimed in claim 9 wherein said textured surface alters optical properties of said image. 11. A display device as claimed in claim 9 wherein said textured surface is formed by a mechanical treatment of said display screen. 12. A display device as claimed in claim 9 wherein said textured surface is formed by a chemical treatment of said display screen. 13. A display device as claimed in claim 9 wherein said textured surface is formed by placing a removable film on said display screen. 14. A display device as claimed in claim 9 wherein said textured surface is transparent. 15. A method for creating a display device having a display screen for electronically displaying an image comprising the step of applying a chemical or mechanical treatment directly onto said display screen to form a textured surface. 16. A method as claimed in claim 15 further comprising the step of applying a protective coating on said display screen. 17. A method as claimed in claim 15 wherein said chemical or mechanical treatment involves applying a texturizing product. 18. A method for creating a display device having a display screen for electronically displaying an image comprising the step of applying a film directly onto said display screen to form a textured surface. 19. A method as claimed in claim 18 further comprising the step of applying a protective coating on said display screen. 20. A method as claimed in claim 18 wherein said film is permanently affixed to said display screen.
A display device comprising a display screen for electronically displaying an image, said display screen having a textured surface on which a user can write without altering the image, is disclosed. A method for creating a display device having a display screen for electronically displaying an image, comprising the step of applying a film directly onto said display screen or applying a chemical or mechanical treatment directly onto said display screen to form a textured surface, is also disclosed.1. A display device comprising a display screen for electronically displaying an image, said display screen having a textured surface on which a user can write without altering the image. 2. A display device as claimed in claim 1 wherein the user can write on said surface using chalk. 3. A display device as claimed in claim 1 wherein the user can write on said surface using a dry erase marker. 4. A display device as claimed in claim 1 wherein said textured surface alters optical properties of said image. 5. A display device as claimed in claim 1 wherein said textured surface is formed by a mechanical treatment of said display screen. 6. A display device as claimed in claim 1 wherein said textured surface is formed by a chemical treatment of said display screen. 7. A display device as claimed in claim 1 wherein said textured surface is formed by placing a removable film on said display screen. 8. A display device as claimed in claim 1 wherein said textured surface is transparent. 9. A display device comprising a display screen for electronically displaying an image, said display screen having a textured surface for altering the image to create a look and feel of a non-electronic work of art. 10. A display device as claimed in claim 9 wherein said textured surface alters optical properties of said image. 11. A display device as claimed in claim 9 wherein said textured surface is formed by a mechanical treatment of said display screen. 12. A display device as claimed in claim 9 wherein said textured surface is formed by a chemical treatment of said display screen. 13. A display device as claimed in claim 9 wherein said textured surface is formed by placing a removable film on said display screen. 14. A display device as claimed in claim 9 wherein said textured surface is transparent. 15. A method for creating a display device having a display screen for electronically displaying an image comprising the step of applying a chemical or mechanical treatment directly onto said display screen to form a textured surface. 16. A method as claimed in claim 15 further comprising the step of applying a protective coating on said display screen. 17. A method as claimed in claim 15 wherein said chemical or mechanical treatment involves applying a texturizing product. 18. A method for creating a display device having a display screen for electronically displaying an image comprising the step of applying a film directly onto said display screen to form a textured surface. 19. A method as claimed in claim 18 further comprising the step of applying a protective coating on said display screen. 20. A method as claimed in claim 18 wherein said film is permanently affixed to said display screen.
2,600
10,829
10,829
15,968,836
2,667
A method and system for fast non-invasive computer-based computation of a hemodynamic index, such as fractional flow reserve (FFR), from medical image data of a patient is disclosed. A patient-specific anatomical model of one or more arteries of a patient is automatically generated based on medical image data of the patient. Regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index are predicted using one or more trained machine learning models.
1. A method for providing fast non-invasive computer-based computation of a hemodynamic index from medical image data of a patient, comprising: automatically generating a patient-specific anatomical model of one or more arteries of a patient based on medical image data of the patient; and predicting regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index using one or more trained machine learning models. 2. The method of claim 1, wherein automatically generating a patient-specific anatomical model of one or more arteries of a patient based on medical image data of the patient comprises: automatically extracting centerlines and cross-sectional contours for each of the one or more arteries of the patient from the medical image data of the patient. 3. The method of claim 1, wherein predicting regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index using one or more trained machine learning models comprises: predicting the regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of the hemodynamic index using the one or more trained machine learning models based on extracted features related to the automatically generated patient-specific anatomical model that are input to the one or more trained machine learning models. 4. The method of claim 3, wherein the features include features extracted from the medical image data of the patient. 5. The method of claim 3, wherein the features include non-invasive patient data and measurements acquired for the patient. 6. The method of claim 3, wherein the features include features extracted from the automatically generated patient-specific anatomical model of the one or more arteries of the patient. 7. The method of claim 3, further comprising: automatically computing initial values for the hemodynamic index at a plurality of locations in the automatically generated patient-specific anatomical model of the one or more arteries of the patient, wherein the features include the initial values computed for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model and features extracted from the initial values for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model. 8. The method of claim 7, wherein automatically computing initial values for the hemodynamic index at a plurality of locations in the automatically generated patient-specific anatomical model of the one or more arteries of the patient comprises: computing initial values for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model of the one or more arteries using a second trained machine learning model. 9. The method of claim 3, further comprising: performing an automated anatomical evaluation of the one or more arteries of the patient in the automatically generated patient-specific anatomical model, wherein the features include anatomical features related to one or more stenosis regions in the one or more arteries of the patient extracted from results of the automated anatomical evaluation of the one or more arteries of the patient in the automatically generated patient-specific anatomical model. 10. 
The method of claim 1, further comprising: requesting user feedback for only the regions in the automatically generated patient-specific anatomical model predicted by the one or more trained machine learning models as requiring user feedback for accurate computation of the hemodynamic index; receiving user feedback for the regions in the automatically generated patient-specific anatomical model predicted by the one or more trained machine learning models as requiring user feedback for accurate computation of the hemodynamic index, resulting in a revised anatomical model of the one or more arteries of the patient; and computing final values for the hemodynamic index at a plurality of locations in the one or more arteries of the patient based on the revised anatomical model of the one or more arteries of the patient. 11. The method of claim 1, wherein the one or more trained machine learning models include a first trained machine learning model for predicting user feedback requirements at a tree level, a second trained machine learning model for predicting user feedback requirements at a branch level, and a third trained machine learning model for predicting user feedback requirements at a cross-sectional contour level. 12. The method of claim 1, wherein the hemodynamic index is fractional flow reserve. 13. The method of claim 1, wherein the one or more arteries of the patient comprise one or more coronary arteries of the patient. 14. An apparatus for providing fast non-invasive computation of a hemodynamic index from medical image data of a patient, comprising: a processor; and a memory storing computer program instructions which when executed by the processor cause the processor to perform operations comprising: automatically generating a patient-specific anatomical model of one or more arteries of a patient based on medical image data of the patient; and predicting regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index using one or more trained machine learning models. 15. The apparatus of claim 14, wherein predicting regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index using one or more trained machine learning models comprises: predicting the regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of the hemodynamic index using the one or more trained machine learning models based on extracted features related to the automatically generated patient-specific anatomical model that are input to the one or more trained machine learning models. 16. The apparatus of claim 15, wherein the operations further comprise: automatically computing initial values for the hemodynamic index at a plurality of locations in the automatically generated patient-specific anatomical model of the one or more arteries of the patient, wherein the features include the initial values computed for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model and features extracted from the initial values for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model. 17. 
The apparatus of claim 15, wherein the operations further comprise: performing an automated anatomical evaluation of the one or more arteries of the patient in the automatically generated patient-specific anatomical model, wherein the features include anatomical features related to one or more stenosis regions in the one or more arteries of the patient extracted from results of the automated anatomical evaluation of the one or more arteries of the patient in the automatically generated patient-specific anatomical model. 18. The apparatus of claim 14, wherein the operations further comprise: requesting user feedback for only the regions in the automatically generated patient-specific anatomical model predicted by the one or more trained machine learning models as requiring user feedback for accurate computation of the hemodynamic index; receiving user feedback for the regions in the automatically generated patient-specific anatomical model predicted by the one or more trained machine learning models as requiring user feedback for accurate computation of the hemodynamic index, resulting in a revised anatomical model of the one or more arteries of the patient; and computing final values for the hemodynamic index at a plurality of locations in the one or more arteries of the patient based on the revised anatomical model of the one or more arteries of the patient. 19. A non-transitory computer readable medium storing computer program instructions for providing fast non-invasive computation of a hemodynamic index from medical image data of a patient, the computer program instructions when executed by a processor cause the processor to perform operations comprising: automatically generating a patient-specific anatomical model of one or more arteries of a patient based on medical image data of the patient; and predicting regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index using one or more trained machine learning models. 20. The non-transitory computer readable medium of claim 19, wherein predicting regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index using one or more trained machine learning models comprises: predicting the regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of the hemodynamic index using the one or more trained machine learning models based on extracted features related to the automatically generated patient-specific anatomical model that are input to the one or more trained machine learning models. 21. The non-transitory computer readable medium of claim 20, wherein the operations further comprise: automatically computing initial values for the hemodynamic index at a plurality of locations in the automatically generated patient-specific anatomical model of the one or more arteries of the patient, wherein the features include the initial values computed for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model and features extracted from the initial values for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model. 22. 
The non-transitory computer readable medium of claim 20, wherein the operations further comprise: performing an automated anatomical evaluation of the one or more arteries of the patient in the automatically generated patient-specific anatomical model, wherein the features include anatomical features related to one or more stenosis regions in the one or more arteries of the patient extracted from results of the automated anatomical evaluation of the one or more arteries of the patient in the automatically generated patient-specific anatomical model. 23. The non-transitory computer readable medium of claim 19, wherein the operations further comprise: requesting user feedback for only the regions in the automatically generated patient-specific anatomical model predicted by the one or more trained machine learning models as requiring user feedback for accurate computation of the hemodynamic index; receiving user feedback for the regions in the automatically generated patient-specific anatomical model predicted by the one or more trained machine learning models as requiring user feedback for accurate computation of the hemodynamic index, resulting in a revised anatomical model of the one or more arteries of the patient; and computing final values for the hemodynamic index at a plurality of locations in the one or more arteries of the patient based on the revised anatomical model of the one or more arteries of the patient.
A method and system for fast non-invasive computer-based computation of a hemodynamic index, such as fractional flow reserve (FFR), from medical image data of a patient is disclosed. A patient-specific anatomical model of one or more arteries of a patient is automatically generated based on medical image data of the patient. Regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index are predicted using one or more trained machine learning models.1. A method for providing fast non-invasive computer-based computation of a hemodynamic index from medical image data of a patient, comprising: automatically generating a patient-specific anatomical model of one or more arteries of a patient based on medical image data of the patient; and predicting regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index using one or more trained machine learning models. 2. The method of claim 1, wherein automatically generating a patient-specific anatomical model of one or more arteries of a patient based on medical image data of the patient comprises: automatically extracting centerlines and cross-sectional contours for each of the one or more arteries of the patient from the medical image data of the patient. 3. The method of claim 1, wherein predicting regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index using one or more trained machine learning models comprises: predicting the regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of the hemodynamic index using the one or more trained machine learning models based on extracted features related to the automatically generated patient-specific anatomical model that are input to the one or more trained machine learning models. 4. The method of claim 3, wherein the features include features extracted from the medical image data of the patient. 5. The method of claim 3, wherein the features include non-invasive patient data and measurements acquired for the patient. 6. The method of claim 3, wherein the features include features extracted from the automatically generated patient-specific anatomical model of the one or more arteries of the patient. 7. The method of claim 3, further comprising: automatically computing initial values for the hemodynamic index at a plurality of locations in the automatically generated patient-specific anatomical model of the one or more arteries of the patient, wherein the features include the initial values computed for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model and features extracted from the initial values for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model. 8. The method of claim 7, wherein automatically computing initial values for the hemodynamic index at a plurality of locations in the automatically generated patient-specific anatomical model of the one or more arteries of the patient comprises: computing initial values for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model of the one or more arteries using a second trained machine learning model. 9. 
The method of claim 3, further comprising: performing an automated anatomical evaluation of the one or more arteries of the patient in the automatically generated patient-specific anatomical model, wherein the features include anatomical features related to one or more stenosis regions in the one or more arteries of the patient extracted from results of the automated anatomical evaluation of the one or more arteries of the patient in the automatically generated patient-specific anatomical model. 10. The method of claim 1, further comprising: requesting user feedback for only the regions in the automatically generated patient-specific anatomical model predicted by the one or more trained machine learning models as requiring user feedback for accurate computation of the hemodynamic index; receiving user feedback for the regions in the automatically generated patient-specific anatomical model predicted by the one or more trained machine learning models as requiring user feedback for accurate computation of the hemodynamic index, resulting in a revised anatomical model of the one or more arteries of the patient; and computing final values for the hemodynamic index at a plurality of locations in the one or more arteries of the patient based on the revised anatomical model of the one or more arteries of the patient. 11. The method of claim 1, wherein the one or more trained machine learning models include a first trained machine learning model for predicting user feedback requirements at a tree level, a second trained machine learning model for predicting user feedback requirements at a branch level, and a third trained machine learning model for predicting user feedback requirements at a cross-sectional contour level. 12. The method of claim 1, wherein the hemodynamic index is fractional flow reserve. 13. The method of claim 1, wherein the one or more arteries of the patient comprise one or more coronary arteries of the patient. 14. An apparatus for providing fast non-invasive computation of a hemodynamic index from medical image data of a patient, comprising: a processor; and a memory storing computer program instructions which when executed by the processor cause the processor to perform operations comprising: automatically generating a patient-specific anatomical model of one or more arteries of a patient based on medical image data of the patient; and predicting regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index using one or more trained machine learning models. 15. The apparatus of claim 14, wherein predicting regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index using one or more trained machine learning models comprises: predicting the regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of the hemodynamic index using the one or more trained machine learning models based on extracted features related to the automatically generated patient-specific anatomical model that are input to the one or more trained machine learning models. 16. 
The apparatus of claim 15, wherein the operations further comprise: automatically computing initial values for the hemodynamic index at a plurality of locations in the automatically generated patient-specific anatomical model of the one or more arteries of the patient, wherein the features include the initial values computed for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model and features extracted from the initial values for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model. 17. The apparatus of claim 15, wherein the operations further comprise: performing an automated anatomical evaluation of the one or more arteries of the patient in the automatically generated patient-specific anatomical model, wherein the features include anatomical features related to one or more stenosis regions in the one or more arteries of the patient extracted from results of the automated anatomical evaluation of the one or more arteries of the patient in the automatically generated patient-specific anatomical model. 18. The apparatus of claim 14, wherein the operations further comprise: requesting user feedback for only the regions in the automatically generated patient-specific anatomical model predicted by the one or more trained machine learning models as requiring user feedback for accurate computation of the hemodynamic index; receiving user feedback for the regions in the automatically generated patient-specific anatomical model predicted by the one or more trained machine learning models as requiring user feedback for accurate computation of the hemodynamic index, resulting in a revised anatomical model of the one or more arteries of the patient; and computing final values for the hemodynamic index at a plurality of locations in the one or more arteries of the patient based on the revised anatomical model of the one or more arteries of the patient. 19. A non-transitory computer readable medium storing computer program instructions for providing fast non-invasive computation of a hemodynamic index from medical image data of a patient, the computer program instructions when executed by a processor cause the processor to perform operations comprising: automatically generating a patient-specific anatomical model of one or more arteries of a patient based on medical image data of the patient; and predicting regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index using one or more trained machine learning models. 20. The non-transitory computer readable medium of claim 19, wherein predicting regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of a hemodynamic index using one or more trained machine learning models comprises: predicting the regions in the automatically generated patient-specific anatomical model for which user feedback is required for accurate computation of the hemodynamic index using the one or more trained machine learning models based on extracted features related to the automatically generated patient-specific anatomical model that are input to the one or more trained machine learning models. 21. 
The non-transitory computer readable medium of claim 20, wherein the operations further comprise: automatically computing initial values for the hemodynamic index at a plurality of locations in the automatically generated patient-specific anatomical model of the one or more arteries of the patient, wherein the features include the initial values computed for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model and features extracted from the initial values for the hemodynamic index at the plurality of locations in the automatically generated patient-specific anatomical model. 22. The non-transitory computer readable medium of claim 20, wherein the operations further comprise: performing an automated anatomical evaluation of the one or more arteries of the patient in the automatically generated patient-specific anatomical model, wherein the features include anatomical features related to one or more stenosis regions in the one or more arteries of the patient extracted from results of the automated anatomical evaluation of the one or more arteries of the patient in the automatically generated patient-specific anatomical model. 23. The non-transitory computer readable medium of claim 19, wherein the operations further comprise: requesting user feedback for only the regions in the automatically generated patient-specific anatomical model predicted by the one or more trained machine learning models as requiring user feedback for accurate computation of the hemodynamic index; receiving user feedback for the regions in the automatically generated patient-specific anatomical model predicted by the one or more trained machine learning models as requiring user feedback for accurate computation of the hemodynamic index, resulting in a revised anatomical model of the one or more arteries of the patient; and computing final values for the hemodynamic index at a plurality of locations in the one or more arteries of the patient based on the revised anatomical model of the one or more arteries of the patient.
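Claims 3-11 above describe a feature-driven pipeline: extract features from the automatically generated anatomical model (and optionally from initial index values), feed them to one or more trained models, and flag only the regions that need user review. The Python sketch below is a minimal illustration of that loop under stated assumptions; the feature set, the RandomForestClassifier, and the 0.5 threshold are all hypothetical choices, not details from the application.

```python
# Illustrative sketch only: predicting which regions of an automatically
# generated patient-specific anatomical model need user review before a
# hemodynamic index (e.g., FFR) is computed. Feature choices, the model
# class, and the threshold are assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_region_features(region):
    """Toy geometric and image features for one centerline segment
    (all keys are hypothetical)."""
    radii = np.asarray(region["cross_sectional_radii"])
    return [
        radii.min(),                      # tightest lumen radius
        radii.mean(),                     # average lumen radius
        radii.min() / radii.mean(),       # crude stenosis ratio
        region["image_contrast"],         # image-derived feature
        region["initial_ffr_estimate"],   # initial index value, if computed
    ]

def predict_feedback_regions(regions, model: RandomForestClassifier,
                             threshold: float = 0.5):
    """Return only the regions whose predicted probability of requiring
    user feedback exceeds the threshold; the rest are accepted as-is."""
    X = np.array([extract_region_features(r) for r in regions])
    need_feedback = model.predict_proba(X)[:, 1] >= threshold
    return [r for r, flag in zip(regions, need_feedback) if flag]
```

For the hierarchical variant of claim 11, three such classifiers would be trained and applied separately at the tree, branch, and cross-sectional-contour levels.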
2,600
10,830
10,830
15,974,398
2,683
An access control system and method are disclosed for permitting access by a user having a mobile device, based on a determined location for the user and additional authentication information sent to the mobile device. A computer system has stored programming instructions configured to cause the computer system to identify the user at an area-based location, send a first access code to the mobile device representing additional authorizing information, receive the first access code within a predefined interval of time, and unlock the access based on a determination that the access code is valid and the identification of the user at the area-based location.
1. An access control system for enabling access by an authorized user having a mobile device, the system comprising: a computer system having stored programming instructions configured to cause the computer system to: identify the user at an area-based location; send a first access code to the mobile device representing additional authorizing information; receive the first access code within a predefined interval of time; and unlock the access based on a determination that the access code is valid and the identification of the user at the area-based location. 2. The system of claim 1 wherein the computer system is distributed among a plurality of computers. 3. The system of claim 1, wherein the computer system is further configured to learn a semantic rule by: mapping a first endpoint in a semantic network model to a first location; mapping a second endpoint in the semantic network model to a second location; determining that the user is at the first endpoint and receiving an input from the user; determining that the user is present at the second endpoint; determining an oriented link based on the determined presence of the user at the first endpoint and the second endpoint; and associating a semantic attribute to the oriented link based on the user input. 4. The system of claim 1 wherein the computer system identifies the authorized user by identity and access information provided by the user. 5. The system of claim 1 wherein the computer system identifies the user at the area-based location based on an RFID device. 6. The system of claim 1, wherein the computer system unlocks the access by controlling at least one device. 7. The system of claim 1, wherein the computer system further stores a semantic network model comprising endpoints and oriented links between the endpoints, and wherein at least a subset of the oriented links is associated with semantic attributes. 8. The system of claim 7, wherein the computer system determines a semantic attribute for the user based on an identification of the user at a first endpoint corresponding to the area-based location and at a second endpoint. 9. The system of claim 7, wherein the semantic network model is further associated with at least one device and the computer is configured to unlock the access by sending a signal to the at least one device. 10. The system of claim 7, wherein the semantic network model is further associated with at least one device and the computer is configured to unlock the access by controlling the at least one device based on an access control rule. 11. The system of claim 1, wherein the computer system further stores a semantic network model having a plurality of elements including endpoints and oriented links between the endpoints and wherein at least one element from among the plurality of elements is associated with an access control rule. 12. The system of claim 11, wherein the semantic network model is further associated with at least one device and the computer is configured to unlock the access by controlling the at least one device based on an access control rule. 13. The system of claim 11, wherein the semantic network model is further associated with at least one device and the computer is configured to unlock the access by sending a signal to the at least one device. 14. The system of claim 1, wherein the computer system further stores a semantic network model having endpoints and oriented links, and further wherein the semantic network model is hierarchical. 15. 
The system of claim 14, wherein the semantic network model is further associated with at least one device and the computer is configured to unlock the access by sending a signal to the at least one device. 16. The system of claim 14, wherein the semantic network model is further associated with at least one device and the computer is configured to unlock the access by controlling the at least one device based on an access control rule. 17. The system of claim 14, wherein the semantic network model is further associated with at least a first and a second semantic attribute and the computer system infers the first semantic attribute at a first level of hierarchy of a semantic network graph and further the computer system infers the second semantic attribute at a second level of hierarchy of the semantic network graph, wherein the first and second semantic attributes are determined based on the same event in relation to the user. 18. The system of claim 1, wherein the computer system is configured to accept time intervals, and wherein the computer system adjusts at least one configuration setting based on the time intervals and values associated with the time intervals. 19. The system of claim 1, wherein the computer system is configured to accept security levels, and wherein the computer system adjusts at least one configuration setting based on the security levels and values associated with the security levels. 20. The system of claim 1, wherein the computer system unlocking the access is further associated with an access control rule. 21. An access control method comprising: determining a user permission of access based on an identification of the user and a localization of the user within an area; based on the determination that the user is authorized, sending an access code to a mobile device associated with the user; receiving the access code within a predefined interval of time; determining that the access code is valid; and permitting the access based on the determination that the access code is valid. 22. The method of claim 21 wherein the user identification comprises identity and access information provided by the user. 23. The method of claim 21 wherein the user is associated with an identity in a radio frequency network. 24. The method of claim 21, wherein the step of permitting the access comprises controlling at least one device. 25. The method of claim 21, wherein the access control method is performed by a computer system, the method further comprising storing, in the computer system, a semantic network model comprising endpoints and oriented links between the endpoints, and wherein at least a subset of the oriented links is associated with semantic attributes. 26. The method of claim 25, further comprising determining a semantic attribute for the user based on a validation of the user at a first endpoint and at a second endpoint. 27. The method of claim 21, wherein the access control method is performed by a computer system, the method further comprising storing, in the computer system, a semantic network model having endpoints and oriented links, wherein the semantic network model is hierarchical. 28. The method of claim 21, wherein the access control method is performed by a computer system, the method further comprising storing, in the computer system, a semantic network model having elements including endpoints and oriented links between the endpoints and wherein at least one element is associated with an access control rule. 29. 
The method of claim 21, wherein the permitting the access is associated with an access control rule.
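As a rough sketch of the flow recited in claims 1 and 21 (identify the user at an area-based location, send a code to the mobile device, accept it only within a predefined interval), consider the following Python fragment; the 60-second window, the in-memory store, and the code format are illustrative assumptions, not details from the application.

```python
# Illustrative sketch: issue a short-lived access code once a user is
# identified at an area-based location, then unlock only if the code is
# returned within the validity window. All names and parameters here are
# assumptions for demonstration.
import hmac, secrets, time

VALIDITY_SECONDS = 60          # assumed "predefined interval of time"
_pending = {}                  # user_id -> (code, issued_at)

def issue_access_code(user_id: str, located_in_area: bool) -> str | None:
    """Send a code to the user's mobile device only if the user was
    identified at the area-based location."""
    if not located_in_area:
        return None
    code = secrets.token_hex(4)            # e.g., 8 hex digits
    _pending[user_id] = (code, time.monotonic())
    return code                            # would be pushed to the device

def try_unlock(user_id: str, presented_code: str) -> bool:
    """Unlock only if the code is valid and arrived within the window."""
    entry = _pending.pop(user_id, None)
    if entry is None:
        return False
    code, issued_at = entry
    in_time = time.monotonic() - issued_at <= VALIDITY_SECONDS
    return in_time and hmac.compare_digest(code, presented_code)
```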
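Claims 3, 7, and 25-26 also recite a semantic network model: endpoints mapped to locations, oriented links between them, and semantic attributes learned from observed user movement. A minimal sketch of that structure and of the claim 3 learning step, with an assumed graph representation and attribute vocabulary, might look like this.

```python
# Illustrative sketch: a semantic network model with endpoints mapped to
# locations and oriented links carrying semantic attributes (claims 3, 7).
# The representation and the attribute vocabulary are assumptions.
from dataclasses import dataclass, field

@dataclass
class SemanticNetworkModel:
    endpoint_location: dict[str, str] = field(default_factory=dict)
    # oriented link (src, dst) -> set of semantic attributes
    link_attributes: dict[tuple[str, str], set[str]] = field(default_factory=dict)

    def map_endpoint(self, endpoint: str, location: str) -> None:
        self.endpoint_location[endpoint] = location

    def learn_semantic_rule(self, first: str, second: str,
                            user_input: str) -> None:
        """Claim 3 style learning: the user is observed at the first
        endpoint (providing an input), then at the second endpoint; an
        oriented link is inferred and the input becomes its attribute."""
        link = (first, second)
        self.link_attributes.setdefault(link, set()).add(user_input)

model = SemanticNetworkModel()
model.map_endpoint("lobby_reader", "building A lobby")
model.map_endpoint("lab_door", "building A lab")
model.learn_semantic_rule("lobby_reader", "lab_door", "authorized transit")
```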
2,600
10,831
10,831
15,307,693
2,683
A pixel source for a visual presentation is disclosed. The pixel source can include a light source, a large gamut pixel, a subtractive mask, and a control input to control the subtractive mask. A display device is also disclosed, comprising a light source array with a large gamut pixel array and a subtractive mask array disposed thereon. In operation, wide-band light emitted from each light source can be modulated by each large gamut pixel to output a plurality of primary colors. Each subtractive mask can be controlled to block, partially transmit, or fully transmit any number of the outputted primary colors to produce color points that can be interpolated and half-toned to output a large gamut of secondaries for each pixel.
1. A pixel source for a visual presentation comprising: a light source; a large gamut pixel receiving light emitted by the light source, the large gamut pixel including a plurality of sub-pixels, each of the plurality of sub-pixels outputting a primary color; a subtractive mask including a plurality of cells overlaying the plurality of sub-pixels of the large gamut pixel, each of the plurality of cells of the subtractive mask to be (i) de-asserted to transmit the primary color, or (ii) asserted to block the primary color; and a controller to assert or de-assert each of the plurality of cells of the subtractive mask to output one or more color points of one or more of the primary colors transmitted through the de-asserted cells of the subtractive mask. 2. The pixel source of claim 1, further comprising a processing resource to dither the one or more color points to output an optical average of the one or more color points, the optical average representing a pixel of the visual presentation. 3. The pixel source of claim 1, wherein the controller operates to dynamically control the subtractive mask in response to a dynamic input signal. 4. The pixel source of claim 1, wherein the light source is a combination of wide-band red, green, and blue light emitting diodes (LEDs). 5. The pixel source of claim 1, wherein each of the plurality of cells of the subtractive mask has variable opacity, and wherein the controller further operates to partially assert one or more of the plurality of cells of the subtractive mask such that the partially asserted cells have partial opacity to partially transmit the respective primary color. 6. A display device comprising: one or more light sources; an array of large gamut pixels disposed over the one or more light sources, each respective large gamut pixel receiving light emitted by the one or more light sources and including a plurality of sub-pixels, each sub-pixel of the respective large gamut pixel modulating the received light to output a narrow-band primary color; an array of subtractive masks disposed over the array of large gamut pixels, each subtractive mask including a plurality of cells overlaying the plurality of sub-pixels of the respective large gamut pixel, each cell of the subtractive mask being transparent to transmit the narrow-band primary color, or opaque to block the narrow-band primary color; and a processing resource to dither one or more color points transmitted through the transparent cells of the subtractive mask to produce a visual presentation. 7. The display device of claim 6, wherein the array of subtractive masks is static to produce the visual presentation as a single backlit image. 8. The display device of claim 6, wherein the array of subtractive masks is dynamic such that each cell of the subtractive mask is to be (i) de-asserted to transmit the narrow-band primary color, or (ii) asserted to block the narrow-band primary color, the display device further comprising a controller to, in response to an input signal, assert or de-assert each cell of the subtractive mask to output the one or more color points comprising one or more of the narrow-band primary colors transmitted through the de-asserted cells of the subtractive mask. 9. The display device of claim 8, wherein the controller operates to dynamically control the array of subtractive masks in response to a dynamic input signal, and wherein the outputted visual presentation is a dynamic output corresponding to the dynamic input signal. 10. 
The display device of claim 8, wherein, prior to dithering the one or more color points, the controller operates to interpolate the one or more color points transmitted through the de-asserted cells of the subtractive mask. 11. The display device of claim 6, wherein the large gamut pixel is composed of a 3×3 grid of nine cells, and wherein each cell of the nine cells modulates the received light to output a unique narrow-band primary color. 12. The display device of claim 6, wherein the one or more light sources comprise one or more combinations of wide-band red, green, and blue light emitting diodes (LEDs). 13. A computer-implemented method for controlling a display device to output a visual presentation, the method performed by one or more processors and comprising: receiving an input signal corresponding to the visual presentation; based on the input signal, controlling an array of subtractive masks overlaying an array of large gamut pixels, each subtractive mask disposed over a respective large gamut pixel and including a plurality of cells precisely overlaying a plurality of sub-pixels of the respective large gamut pixel, each sub-pixel outputting a narrow-band primary color, wherein controlling the array of subtractive masks includes, for each individual cell of the subtractive mask: (i) asserting the individual cell to block the narrow-band primary color; (ii) partially asserting the individual cell to partially transmit the narrow-band primary color; or (iii) de-asserting the individual cell to transmit the narrow-band primary color; wherein the one or more processors operate to output one or more color points comprising one or more of the narrow-band primary colors transmitted through the de-asserted and the partially asserted cells of the subtractive mask. 14. The computer-implemented method of claim 13, wherein the input signal is a dynamic input signal, and wherein the one or more processors operate to dynamically control the array of subtractive masks in response to the dynamic input signal to produce a dynamic output as the visual presentation. 15. The computer-implemented method of claim 13, further comprising interpolating and dithering the one or more color points to produce a secondary color representing a pixel in the visual presentation.
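To make the assert/de-assert mechanics of claims 1 and 5 and the dithered optical average of claim 2 concrete, the sketch below masks a 3×3 large gamut pixel (the layout of claim 11) and averages the transmitted primaries; the RGB approximations of the nine narrow-band primaries, the transmittance encoding, and the frame-averaging dither are assumptions for illustration only.

```python
# Illustrative sketch: masking a 3x3 large gamut pixel (claim 11) with a
# subtractive mask and averaging the transmitted primaries into one color
# point. Primary values, mask encoding, and the averaging "dither" are
# assumptions for demonstration.
import numpy as np

# Nine narrow-band primaries as RGB approximations, one per sub-pixel.
PRIMARIES = np.array([
    [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
    [1.0, 0.5, 0.0], [0.0, 1.0, 0.5], [0.5, 0.0, 1.0],
    [1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0],
])

def color_point(mask: np.ndarray) -> np.ndarray:
    """mask holds one transmittance per cell: 0.0 = asserted (opaque),
    1.0 = de-asserted (transparent), in between = partially asserted."""
    transmitted = PRIMARIES * mask[:, None]
    return transmitted.sum(axis=0) / max(mask.sum(), 1e-9)

def dithered_pixel(masks: list[np.ndarray]) -> np.ndarray:
    """Optical average of several color points over successive mask
    states, approximating the dithered secondary of claim 2."""
    return np.mean([color_point(m) for m in masks], axis=0)

# Example: alternate between a red-only and a cyan-only mask state.
red_only  = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0], dtype=float)
cyan_only = np.array([0, 0, 0, 0, 0, 0, 0, 1, 0], dtype=float)
print(dithered_pixel([red_only, cyan_only]))   # mid-gray secondary
```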
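Claims 10 and 15 place an interpolation step before dithering; one plausible reading is a linear blend between two achievable color points, as in this assumed sketch (a real device would presumably interpolate in a calibrated color space rather than raw RGB).

```python
# Illustrative sketch: linear interpolation between two achievable color
# points prior to dithering (claims 10, 15). The blending model is an
# assumption for demonstration.
import numpy as np

def interpolate_color_points(p0, p1, t: float) -> np.ndarray:
    """Blend two color points; t is the fraction of time (or area)
    devoted to p1 during half-toning."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    return (1.0 - t) * p0 + t * p1

# A 30/70 temporal split between red and cyan color points:
print(interpolate_color_points([1, 0, 0], [0, 1, 1], 0.7))
```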
2,600
10,832
10,832
16,394,285
2,668
The facial verification apparatus is a mobile computing apparatus, including a camera to capture an image, a display, and one or more processors. While in a lock state, the image is captured and facial verification is performed using a face image, or using a detected face in response to the face being detected. The facial verification includes a matching with respect to the detected face, or obtained face image, and registered face information. If the verification is successful, the lock state of the apparatus may be canceled and the user allowed access to the apparatus. The lock state may be canceled when the verification is successful and the user has been determined to have been attempting to gain access to the apparatus. Face image feedback to the user may not be displayed during the detecting for, or obtaining of, the face and/or performing of the facial verification.
1. A mobile computing apparatus, comprising: a camera configured to capture an image; a display configured to be observable by a user during a capturing of the image by the camera; and one or more processors configured to: while the mobile computing apparatus is in a lock state, control the camera to perform the capturing of the image, detect for a face in the captured image, and in response to the face having been detected, perform facial verification by performing a matching with respect to the detected face and a registered face information of a predetermined valid user; and in response to the matching being determined successful, cancel the lock state of the mobile computing apparatus and allow the user access to the mobile computing apparatus, wherein the mobile computing apparatus in the lock state does not display face image feedback to the user during the detecting for the face and/or the performing of the facial verification in response to the face having been detected. 2. The mobile computing apparatus of claim 1, wherein the mobile computing apparatus in the lock state does not display image information of the captured image, as the non-displayed face image feedback, to the user on the display during any of the detecting for the face and during the performing of the facial verification, or does not display the face image feedback to the user during the lock state. 3. The mobile computing apparatus of claim 1, wherein, for the performing of the facial verification, the one or more processors are configured to: extract a feature of the detected face; and determine whether the facial verification is successful based on a result of a comparing of the extracted feature and a feature represented in the registered face information of the predetermined valid user. 4. The mobile computing apparatus of claim 3, wherein, for the performing of the facial verification, the one or more processors are further configured to: generate a face image of the detected face through a normalization of the captured image or another image derived from the captured image, wherein the extraction of the feature of the detected face image includes an extraction of the feature from the generated face image. 5. The mobile computing apparatus of claim 4, wherein, for the performing of the normalization, the one or more processors are configured to: perform facial landmark detection with respect to the detected face; and generate the face image through a transformation, based on a result of the facial landmark detection, of image data for the detected face to correspond to a corresponding predetermined landmark arrangement or positioning, and/or to correspond to a predetermined face sizing. 6. The mobile computing apparatus of claim 5, wherein, for the performing of the facial landmark detection, the one or more processors are configured to perform landmark detection using an active contour model (ACM), an active shape model (ASM), an active appearance model (AAM), a supervised descent method (SDM), or a neural network model. 7. The mobile computing apparatus of claim 5, where the transformation is performed through an affine transformation. 8. 
The mobile computing apparatus of claim 1, wherein the one or more processors are further configured to: generate a face image of the detected face in the captured image through a normalization of the captured image or another image derived from the captured image, wherein the performing of the facial verification includes providing the generated face image to a neural network of a facial verification model used to perform the facial verification, and wherein the normalization includes a formatting of the captured image or the detected face in the captured image into a form that the neural network is configured to receive. 9. The mobile computing apparatus of claim 8, wherein the formatting of the captured image or the detected face in the captured image includes performing a transformation of the captured image or the detected face in the captured image to map a vector space corresponding to determined facial landmarks of the detected face to a vector space corresponding to an input of the neural network. 10. The mobile computing apparatus of claim 8, wherein the normalizing includes at least cropping or changing a size of the captured image or the detected face in the captured image. 11. The mobile computing apparatus of claim 8, wherein the neural network is configured to perform an extraction of one or more features from image information of the generated face image. 12. The mobile computing apparatus of claim 11, wherein the neural network is further configured to perform a comparison of the one or more extracted features, or a feature derived from the one or more extracted features, and a feature represented in the registered face information of the predetermined valid user to determine whether the facial verification is successful. 13. The mobile computing apparatus of claim 12, wherein the neural network is configured to perform extractions of plural respective features with respect to the image information of the generated face image, and the comparison includes comparing one, any two or more, or all of the plural respective features to one or more features correspondingly represented in the registered face information. 14. The mobile computing apparatus of claim 13, wherein the neural network is configured to perform extractions of plural respective features with respect to the image information of the generated face image through at least two stages of feature extraction, and the comparison includes comparing one or more features represented in the registered face information to any one, any two or more, or all extracted feature results of the at least two stages of feature extractions. 15. The mobile computing apparatus of claim 1, wherein, for the performing of the facial verification, the one or more processors are configured to: generate a face image of the detected face in the captured image through a normalization of the captured image or another image derived from the captured image; extract a feature of the generated face image based on a difference between image information of the detected face in the generated face image and a reference image that includes reference image information; and determine whether the facial verification is successful based on a result of a comparing of the extracted feature and a feature represented in the registered face information of the predetermined valid user. 16. 
The mobile computing apparatus of claim 15, wherein the difference represents a subtracting of respective pixel values of the reference image from corresponding pixel values of image information of the detected face. 17. A mobile computing apparatus, comprising: a camera configured to capture an image; a display configured to be observable by a user during a capturing of the image by the camera; and one or more processors configured to: while the mobile computing apparatus is in a lock state, control the camera to perform the capturing of the image, obtain a face image from the captured image, perform facial verification by performing a matching with respect to the obtained face image and a registered face information of a predetermined valid user, and determine whether the user is attempting to gain access to the mobile computing apparatus; and in response to the matching being determined successful and the user being determined to be attempting to gain access to the mobile computing apparatus, cancel the lock state of the mobile computing apparatus and allow the user access to the mobile computing apparatus, wherein the mobile computing apparatus in the lock state does not display face image feedback to the user during the obtaining of the face image and/or during the performing of the facial verification. 18. The apparatus of claim 17, wherein the mobile computing apparatus in the lock state does not display face image feedback to the user prior to the facial verification being performed, does not display image information of any of the captured image and the obtained face image, as the non-displayed face image feedback, to the user on the display during any of the obtaining of the face image and during the performing of the facial verification, or does not display the face image feedback to the user during the lock state. 19. The mobile computing apparatus of claim 17, wherein the determination of whether the user is attempting to gain access to the mobile computing apparatus occurs before the performing of the facial verification. 20. The mobile computing apparatus of claim 17, wherein the determination that the user is attempting to gain access to the mobile computing apparatus occurs before the performing of the facial verification. 21. The mobile computing apparatus of claim 17, wherein, for the performing of the facial verification, the one or more processors are configured to: extract a feature of the obtained face image; and determine whether the facial verification is successful based on a result of a comparing of the extracted feature and a feature represented in the registered face information of the predetermined valid user. 22. The mobile computing apparatus of claim 21, wherein, for the obtaining of the face image, the one or more processors are configured to normalize image information with respect to a detected face in the captured image or in another image derived from the captured image. 23. The mobile computing apparatus of claim 22, wherein, for the performing of the normalization, the one or more processors are configured to: perform facial landmark detection with respect to the detected face; and perform the obtaining of the face image through a transformation, based on a result of the facial landmark detection, of image data for the detected face in the captured image to correspond to a corresponding predetermined landmark arrangement or positioning, and/or to correspond to a predetermined face sizing. 24. 
The mobile computing apparatus of claim 17, wherein, for the obtaining of the face image, the one or more processors are configured to normalize image information with respect to a detected face in the captured image or in another image derived from the captured image, wherein the performing of the facial verification includes providing the obtained face image to a neural network of a facial verification model used to perform the facial verification, and wherein the normalization includes a formatting of the captured image or the detected face in the captured image into a form that the neural network is configured to receive. 25. The mobile computing apparatus of claim 24, wherein the formatting of the captured image or the detected face in the captured image includes performing a transformation of the captured image or the detected face in the captured image to map a vector space corresponding to determined facial landmarks of the detected face to a vector space corresponding to an input of the neural network. 26. The mobile computing apparatus of claim 24, wherein the normalization includes at least cropping or changing a size of the captured image or the detected face in the captured image. 27. The mobile computing apparatus of claim 24, wherein the neural network is configured to perform an extraction of one or more features from image information of the obtained face image, and perform a comparison of the one or more extracted features, or a feature derived from the one or more extracted features, and a feature represented in the registered face information of the predetermined valid user to determine whether the facial verification is successful. 28. The mobile computing apparatus of claim 17, wherein, for the obtaining of the face image, the one or more processors are configured to normalize image information with respect to a detected face in the captured image or in another image derived from the captured image, and wherein, for the performing of the facial verification, the one or more processors are configured to: extract a feature of the obtained face image based on a difference between image information of the detected face in the obtained face image and a reference image that includes reference image information; and determine whether the facial verification is successful based on a result of a comparing of the extracted feature and a feature represented in the registered face information of the predetermined valid user. 29. 
A mobile computing apparatus, comprising: a camera; a display; and one or more processors configured to control an operation of the mobile computing apparatus, including the one or more processors being configured to: control a capturing, by the camera, of an image of a user facing the display while the mobile computing apparatus is in a lock state; and control an unlocking of the mobile computing apparatus from the lock state, wherein, for the control of the unlocking of the mobile computing apparatus, the one or more processors are configured to: while in the lock state, perform a facial verification operation that includes comparing information of a face of the user, obtained from the captured image, and information of a registered face, to determine whether the user is a valid user of the mobile computing apparatus; while in the lock state, determine whether the user is attempting to gain access to the computing apparatus; and cancel the lock state of the mobile computing apparatus in response to the user being determined to be the valid user, as a result of the facial verification operation, and the user being determined to be attempting to gain access to the computing apparatus, wherein the mobile computing apparatus in the lock state does not display preview image information, corresponding to the captured image, to the user on the display coincident with the control of the capturing of the image. 30. A mobile computing apparatus, comprising: a camera; a display; and one or more processors configured to control an operation of the mobile computing apparatus, including a control of an unlocking of the mobile computing apparatus from a lock state, wherein, for the control of the unlocking of the mobile computing apparatus from the lock state, the one or more processors are configured to: control a capturing, by the camera, of an image while in the lock state; detect for a face in the captured image while in the lock state, wherein the mobile computing apparatus remains in the lock state when the detecting for the face determines that the face is detected; while in the lock state, perform a facial verification operation that compares information of the face and information of a registered face to determine whether the face matches a verified user's face; and cancel the lock state of the mobile computing apparatus when the face, of a user, is determined to match the verified user's face in the facial verification operation, wherein the display is configured to be observable by the user during the capturing of the image, wherein the mobile computing apparatus in the lock state does not display preview image information, corresponding to the captured image, to the user on the display coincident with the controlling of the capturing of the image.
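Putting the independent claims together, the lock-state flow is: capture a frame, detect for a face, normalize it, extract a feature, and compare against the registered face information, all without rendering face image feedback. The sketch below illustrates that loop; the detector and embedder interfaces, the cosine-similarity match, and the 0.7 threshold are hypothetical stand-ins for whatever facial verification model is actually used.

```python
# Illustrative sketch of the lock-state verification flow (claims 1, 17).
# The detector/embedder APIs and the 0.7 cosine threshold are assumptions;
# normalize_face is defined in the normalization sketch that follows.
import numpy as np

MATCH_THRESHOLD = 0.7  # assumed acceptance threshold

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_and_unlock(frame, detector, embedder,
                      registered_feature: np.ndarray) -> bool:
    """Runs while the device is locked; note that no face image feedback
    is rendered to the display at any point in this flow."""
    face_box = detector.detect(frame)                # hypothetical API
    if face_box is None:
        return False                                 # stay locked
    landmarks = detector.landmarks(frame, face_box)  # hypothetical API
    face = normalize_face(frame, landmarks)          # affine normalization
    feature = embedder.embed(face)                   # hypothetical API
    return cosine_similarity(feature, registered_feature) >= MATCH_THRESHOLD
```

In the claim 17 and claim 29 variants, the unlock would additionally be gated on a separate determination that the user is attempting to gain access to the apparatus (for example, a lift-to-wake or button-press signal).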
The facial verification apparatus is a mobile computing apparatus, including a camera to capture an image, a display, and one or more processors. While in a lock state, the image is captured and facial verification performed using a face image, or using a detected face and in response to the face being detected. The facial verification includes a matching with respect to the detected face, or obtained face image, and a registered face information. If the verification is successful, the lock state of the apparatus may be canceled and the user allowed access to the apparatus. The lock state may be cancelled when the verification is successful and the user has been determined to have been attempting to gain access to the apparatus. Face image feedback to the user may not be displayed during the detecting for, or obtaining of, the face and/or performing of the facial verification.1. A mobile computing apparatus, comprising: a camera configured to capture an image; a display configured to be observable by a user during a capturing of the image by the camera; and one or more processors configured to: while the mobile computing apparatus is in a lock state, control the camera to perform the capturing of the image, detect for a face in the captured image, and in response to the face having been detected, perform facial verification by performing a matching with respect to the detected face and a registered face information of a predetermined valid user; and in response to the matching being determined successful, cancel the lock state of the mobile computing apparatus and allow the user access to the mobile computing apparatus, wherein the mobile computing apparatus in the lock state does not display face image feedback to the user during the detecting for the face and/or the performing of the facial verification in response to the face having been detected. 2. The mobile computing apparatus of claim 1, wherein the mobile computing apparatus in the lock state does not display image information of the captured image, as the non-displayed face image feedback, to the user on the display during any of the detecting for the face and during the performing of the facial verification, or does not display the face image feedback to the user during the lock state. 3. The mobile computing apparatus of claim 1, wherein, for the performing of the facial verification, the one or more processors are configured to: extract a feature of the detected face; and determine whether the facial verification is successful based on a result of a comparing of the extracted feature and a feature represented in the registered face information of the predetermined valid user. 4. The mobile computing apparatus of claim 3, wherein, for the performing of the facial verification, the one or more processors are further configured to: generate a face image of the detected face through a normalization of the captured image or another image derived from the captured image, wherein the extraction of the feature of the detected face image includes an extraction of the feature from the generated face image. 5. 
The mobile computing apparatus of claim 4, wherein, for the performing of the normalization, the one or more processors are configured to: perform facial landmark detection with respect to the detected face; and generate the face image through a transformation, based on a result of the facial landmark detection, of image data for the detected face to correspond to a corresponding predetermined landmark arrangement or positioning, and/or to correspond to a predetermined face sizing. 6. The mobile computing apparatus of claim 5, wherein, for the performing of the facial landmark detection, the one or more processors are configured to perform landmark detection using an active contour model (ACM), an active shape model (ASM), an active appearance model (AAM), a supervised descent method (SDM), or a neural network model. 7. The mobile computing apparatus of claim 5, where the transformation is performed through an affine transformation. 8. The mobile computing apparatus of claim 1, wherein the one or more processors are further configured to: generate a face image of the detected face in the captured image through a normalization of the captured image or another image derived from the captured image, wherein the performing of the facial verification includes providing the generated face image to a neural network of a facial verification model used to perform the facial verification, and wherein the normalization includes a formatting of the captured image or the detected face in the captured image into a form that the neural network is configured to receive. 9. The mobile computing apparatus of claim 8, wherein the formatting of the captured image or the detected face in the captured image includes performing a transformation of the captured image or the detected face in the captured image to map a vector space corresponding to determined facial landmarks of the detected face to a vector space corresponding to an input of the neural network. 10. The mobile computing apparatus of claim 8, wherein the normalizing includes at least cropping or changing a size of the captured image or the detected face in the captured image. 11. The mobile computing apparatus of claim 8, wherein the neural network is configured to perform an extraction of one or more features from image information of the generated face image. 12. The mobile computing apparatus of claim 11, wherein the neural network is further configured to perform a comparison of the one or more extracted features, or a feature derived from the one or more extracted features, and a feature represented in the registered face information of the predetermined valid user to determine whether the facial verification is successful. 13. The mobile computing apparatus of claim 12, wherein the neural network is configured to perform extractions of plural respective features with respect to the image information of the generated face image, and the comparison includes comparing one, any two or more, or all of the plural respective features to one or more features correspondingly represented in the registered face information. 14. 
The mobile computing apparatus of claim 13, wherein the neural network is configured to perform extractions of plural respective features with respect to the image information of the generated face image through at least two stages of feature extraction, and the comparison includes comparing one or more features represented in the registered face information to any one, any two or more, or all extracted feature results of the at least two stages of feature extractions. 15. The mobile computing apparatus of claim 1, wherein, for the performing of the facial verification, the one or more processors are configured to: generate a face image of the detected face in the captured image through a normalization of the captured image or another image derived from the captured image; extract a feature of the generated face image based on a difference between image information of the detected face in the generated face image and a reference image that includes reference image information; and determine whether the facial verification is successful based on a result of a comparing of the extracted feature and a feature represented in the registered face information of the predetermined valid user. 16. The mobile computing apparatus of claim 15, wherein the difference represents a subtracting of respective pixel values of the reference image from corresponding pixel values of image information of the detected face. 17. A mobile computing apparatus, comprising: a camera configured to capture an image; a display configured to be observable by a user during a capturing of the image by the camera; and one or more processors configured to: while the mobile computing apparatus is in a lock state, control the camera to perform the capturing of the image, obtain a face image from the captured image, perform facial verification by performing a matching with respect to the obtained face image and a registered face information of a predetermined valid user, and determine whether the user is attempting to gain access to the mobile computing apparatus; and in response to the matching being determined successful and the user being determined to be attempting to gain access to the mobile computing apparatus, cancel the lock state of the mobile computing apparatus and allow the user access to the mobile computing apparatus, wherein the mobile computing apparatus in the lock state does not display face image feedback to the user during the obtaining of the face image and/or during the performing of the facial verification. 18. The apparatus of claim 17, wherein the mobile computing apparatus in the lock state does not display face image feedback to the user prior to the facial verification being performed, does not display image information of any of the captured image and the obtained face image, as the non-displayed face image feedback, to the user on the display during any of the obtaining of the face image and during the performing of the facial verification, or does not display the face image feedback to the user during the lock state. 19. The mobile computing apparatus of claim 17, wherein the determination of whether the user is attempting to gain access to the mobile computing apparatus occurs before the performing of the facial verification. 20. The mobile computing apparatus of claim 17, wherein the determination that the user is attempting to gain access to the mobile computing apparatus occurs before the performing of the facial verification. 21. 
The mobile computing apparatus of claim 17, wherein, for the performing of the facial verification, the one or more processors are configured to: extract a feature of the obtained face image; and determine whether the facial verification is successful based on a result of a comparing of the extracted feature and a feature represented in the registered face information of the predetermined valid user. 22. The mobile computing apparatus of claim 21, wherein, for the obtaining of the face image, the one or more processors are configured to normalize image information with respect to a detected face in the captured image or in another image derived from the captured image. 23. The mobile computing apparatus of claim 22, wherein, for the performing of the normalization, the one or more processors are configured to: perform facial landmark detection with respect to the detected face; and perform the obtaining of the face image through a transformation, based on a result of the facial landmark detection, of image data for the detected face in the captured image to correspond to a corresponding predetermined landmark arrangement or positioning, and/or to correspond to a predetermined face sizing. 24. The mobile computing apparatus of claim 17, wherein, for the obtaining of the face image, the one or more processors are configured to normalize image information with respect to a detected face in the captured image or in another image derived from the captured image, wherein the performing of the facial verification includes providing the obtained face image to a neural network of a facial verification model used to perform the facial verification, and wherein the normalization includes a formatting of the captured image or the detected face in the captured image into a form that the neural network is configured to receive. 25. The mobile computing apparatus of claim 24, wherein the formatting of the captured image or the detected face in the captured image includes performing a transformation of the captured image or the detected face in the captured image to map a vector space corresponding to determined facial landmarks of the detected face to a vector space corresponding to an input of the neural network. 26. The mobile computing apparatus of claim 24, wherein the normalization includes at least cropping or changing a size of the captured image or the detected face in the captured image. 27. The mobile computing apparatus of claim 24, wherein the neural network is configured to perform an extraction of one or more features from image information of the obtained face image, and perform a comparison of the one or more extracted features, or a feature derived from the one or more extracted features, and a feature represented in the registered face information of the predetermined valid user to determine whether the facial verification is successful. 28. 
The mobile computing apparatus of claim 17, wherein, for the obtaining of the face image, the one or more processors are configured to normalize image information with respect to a detected face in the captured image or in another image derived from the captured image, and wherein, for the performing of the facial verification, the one or more processors are configured to: extract a feature of the obtained face image based on a difference between image information of the detected face in the obtained face image and a reference image that includes reference image information; and determine whether the facial verification is successful based on a result of a comparing of the extracted feature and a feature represented in the registered face information of the predetermined valid user. 29. A mobile computing apparatus, comprising: a camera; a display; and one or more processors configured to control an operation of the mobile computing apparatus, including the one or more processors being configured to: control a capturing, by the camera, of an image of a user facing the display while the mobile computing apparatus is in a lock state; and control an unlocking of the mobile computing apparatus from the lock state, wherein, for the control of the unlocking of the mobile computing apparatus, the one or more processors are configured to: while in the lock state, perform a facial verification operation that includes comparing information of a face of the user, obtained from the captured image, and information of a registered face, to determine whether the user is a valid user of the mobile computing apparatus; while in the lock state, determine whether the user is attempting to gain access to the mobile computing apparatus; and cancel the lock state of the mobile computing apparatus in response to the user being determined to be the valid user, as a result of the facial verification operation, and the user being determined to be attempting to gain access to the mobile computing apparatus, wherein the mobile computing apparatus in the lock state does not display preview image information, corresponding to the captured image, to the user on the display coincident with the control of the capturing of the image. 30. 
A mobile computing apparatus, comprising: a camera; a display; and one or more processors configured to control an operation of the mobile computing apparatus, including a control of an unlocking of the mobile computing apparatus from a lock state, wherein, for the control of the unlocking of the mobile computing apparatus from the lock state, the one or more processors are configured to: control a capturing, by the camera, of an image while in the lock state; detect for a face in the captured image while in the lock state, wherein the mobile computing apparatus remains in the lock state when the detecting for the face determines that the face is detected; while in the lock state, perform a facial verification operation that compares information of the face and information of a registered face to determine whether the face matches a verified user's face; and cancel the lock state of the mobile computing apparatus when the face of a user is determined to match the verified user's face in the facial verification operation, wherein the display is configured to be observable by the user during the capturing of the image, wherein the mobile computing apparatus in the lock state does not display preview image information, corresponding to the captured image, to the user on the display coincident with the controlling of the capturing of the image.
2,600
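The claims in the record above turn on normalizing a detected face by mapping detected facial landmarks onto a predetermined landmark arrangement through an affine transformation before the crop is handed to the verification network. Below is a minimal sketch of that step, assuming the landmarks have already been detected (e.g., by an ASM, SDM, or neural model); the five-point template, the 112x112 crop size, and all function names are hypothetical, and only NumPy least squares plus OpenCV's warpAffine are assumed.

```python
import numpy as np
import cv2  # only warpAffine is used

# Hypothetical canonical landmark arrangement (eye centers, nose tip,
# mouth corners) for a 112x112 normalized face crop.
TEMPLATE = np.float64([
    [38.3, 51.7], [73.5, 51.5],   # eye centers
    [56.0, 71.7],                 # nose tip
    [41.5, 92.4], [70.7, 92.2],   # mouth corners
])

def estimate_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares 2x3 affine transform mapping src landmarks to dst."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src; A[0::2, 2] = 1.0   # rows for x' = a*x + b*y + c
    A[1::2, 3:5] = src; A[1::2, 5] = 1.0   # rows for y' = d*x + e*y + f
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return params.reshape(2, 3)

def normalize_face(image: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
    """Warp the face so its landmarks match the template arrangement."""
    M = estimate_affine(np.float64(landmarks), TEMPLATE)
    return cv2.warpAffine(image, M, (112, 112))
```

The warped crop is what a claim-8-style pipeline would feed to the neural network; cropping or resizing alone (claim 10) is the degenerate case where the estimated transform is a pure scale and translation.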
10,833
10,833
14,843,375
2,611
According to an embodiment of the present disclosure, a method and an electronic device for providing a Virtual Reality (VR) service by the electronic device are provided. The method includes: determining whether the electronic device is connected with a Head-Mounted Device (HMD); if the electronic device is connected with the HMD, determining whether a user is wearing the HMD while the electronic device is connected with the HMD; and if the user is wearing the HMD while the electronic device is connected with the HMD, switching an operation mode of the electronic device to a first operation mode in which the electronic device provides the VR service to the user.
1. A method for providing a Virtual Reality (VR) service by an electronic device, the method comprising: determining whether the electronic device is connected with a Head-Mounted Device (HMD); if the electronic device is connected with the HMD, determining whether a user is wearing the HMD while the electronic device is connected with the HMD; and if the user is wearing the HMD while the electronic device is connected with the HMD, switching an operation mode of the electronic device to a first operation mode in which the electronic device provides the VR service to the user. 2. The method of claim 1, further comprising: if the user is not wearing the HMD while the electronic device is connected with the HMD, maintaining a second operation mode. 3. The method of claim 2, wherein determining whether the electronic device is connected with the HMD comprises: receiving, from the HMD, an electrical signal indicating that the electronic device is connected with the HMD; and switching the operation mode of the electronic device to the second operation mode. 4. The method of claim 1, wherein determining whether the user is wearing the HMD comprises: if receiving from the HMD an electrical signal indicating that the user is wearing the HMD, determining that the user wears the HMD. 5. The method of claim 4, further comprising: if the electronic device does not receive the electrical signal indicating that the user is wearing the HMD, determining that the user does not wear the HMD. 6. The method of claim 1, further comprising: if the electronic device is connected with the HMD, displaying a temporary image and driving a Three-Dimensional (3D) engine to provide the VR service. 7. The method of claim 6, wherein the temporary image includes at least one of a black image, a logo image, and an image preset by the user. 8. The method of claim 1, wherein determining whether the electronic device is connected with the HMD comprises sensing whether the electronic device is connected with the HMD through a previously provided communication interface. 9. The method of claim 2, wherein determining whether the user is wearing the HMD while the electronic device is connected with the HMD comprises: sensing whether the user is wearing the HMD through a previously provided sensor module; and if sensing that the user wears the HMD through the previously provided sensor module, determining that the user is wearing the HMD. 10. An electronic device for providing a Virtual Reality (VR) service, the electronic device comprising: a display; and a processor configured to: determine whether a user is wearing a Head-Mounted Device (HMD), and if the user is wearing the HMD, switch to a first operation mode of the electronic device in which the electronic device provides the VR service to the user through the display. 11. The electronic device of claim 10, wherein, if the user is not wearing the HMD, the processor maintains a second operation mode. 12. The electronic device of claim 10, further comprising a communication interface configured to receive, from the HMD, an electrical signal indicating that the electronic device is connected with the HMD, wherein the processor is further configured to determine whether the user is wearing the HMD if the communication interface receives, from the HMD, the electrical signal indicating that the electronic device is connected with the HMD. 13. 
The electronic device of claim 12, wherein the processor is further configured to determine that the user is not wearing the HMD, if the communication interface does not receive the electrical signal indicating that the electronic device is connected with the HMD. 14. The electronic device of claim 10, wherein the processor is further configured to display a temporary image through the display and drive a Three-Dimensional (3D) engine to provide the VR service, if the processor determines that the electronic device is connected with the HMD. 15. The electronic device of claim 14, wherein the temporary image includes at least one of a black image, a logo image, and an image preset by the user. 16. The electronic device of claim 10, further comprising a communication interface configured to sense whether the electronic device is connected with the HMD. 17. The electronic device of claim 10, further comprising a sensor module configured to sense whether the user is wearing the HMD, wherein the processor is further configured to determine that the user is wearing the HMD if the sensor module senses that the user is wearing the HMD. 18. A method for providing a Virtual Reality (VR) service by a Head-Mounted Device (HMD), the method comprising: sensing whether a user is wearing the HMD while the HMD is connected with an electronic device; and if the user is wearing the HMD, transmitting, to the electronic device, a first electrical signal indicating that the user is wearing the HMD. 19. The method of claim 18, further comprising: before sensing whether the user is wearing the HMD, transmitting, to the electronic device, a second electrical signal indicating that the HMD is connected with the electronic device. 20. The method of claim 18, wherein sensing whether the user is wearing the HMD comprises sensing whether the user is wearing the HMD through a sensor module included in the HMD. 21. A Head-Mounted Device (HMD) for providing a Virtual Reality (VR) service, the HMD comprising: a sensor module configured to sense whether a user is wearing the HMD; and a communication interface configured to transmit, to an electronic device, if the user is wearing the HMD, a first electrical signal indicating that the user is wearing the HMD. 22. The HMD of claim 21, wherein the communication interface is further configured to transmit, to the electronic device, a second electrical signal indicating that the HMD is connected with the electronic device.
According to an embodiment of the present disclosure, a method and an electronic device for providing a Virtual Reality (VR) service by the electronic device are provided. The method includes: determining whether the electronic device is connected with a Head-Mounted Device (HMD); if the electronic device is connected with the HMD, determining whether a user is wearing the HMD while the electronic device is connected with the HMD; and if the user is wearing the HMD while the electronic device is connected with the HMD, switching an operation mode of the electronic device to a first operation mode in which the electronic device provides the VR service to the user.1. A method for providing a Virtual Reality (VR) service by an electronic device, the method comprising: determining whether the electronic device is connected with a Head-Mounted Device (HMD); if the electronic device is connected with the HMD, determining whether a user is wearing the HMD while the electronic device is connected with the HMD; and if the user is wearing the HMD while the electronic device is connected with the HMD, switching an operation mode of the electronic device to a first operation mode in which the electronic device provides the VR service to the user. 2. The method of claim 1, further comprising: if the user is not wearing the HMD while the electronic device is connected with the HMD, maintaining a second operation mode. 3. The method of claim 2, wherein determining whether the electronic device is connected with the HMD comprises: receiving, from the HMD, an electrical signal indicating that the electronic device is connected with the HMD; and switching the operation mode of the electronic device to the second operation mode. 4. The method of claim 1, wherein determining whether the user is wearing the HMD comprises: if receiving from the HMD an electrical signal indicating that the user is wearing the HMD, determining that the user wears the HMD. 5. The method of claim 4, further comprising: if the electronic device does not receive the electrical signal indicating that the user is wearing the HMD, determining that the user does not wear the HMD. 6. The method of claim 1, further comprising: if the electronic device is connected with the HMD, displaying a temporary image and driving a Three-Dimensional (3D) engine to provide the VR service. 7. The method of claim 6, wherein the temporary image includes at least one of a black image, a logo image, and an image preset by the user. 8. The method of claim 1, wherein determining whether the electronic device is connected with the HMD comprises sensing whether the electronic device is connected with the HMD through a previously provided communication interface. 9. The method of claim 2, wherein determining whether the user is wearing the HMD while the electronic device is connected with the HMD comprises: sensing whether the user is wearing the HMD through a previously provided sensor module; and if sensing that the user wears the HMD through the previously provided sensor module, determining that the user is wearing the HMD. 10. An electronic device for providing a Virtual Reality (VR) service, the electronic device comprising: a display; and a processor configured to: determine whether a user is wearing a Head-Mounted Device (HMD), and if the user is wearing the HMD, switch to a first operation mode of the electronic device in which the electronic device provides the VR service to the user through the display. 11. 
The electronic device of claim 10, wherein, if the user is not wearing the HMD, the processor maintains a second operation mode. 12. The electronic device of claim 10, further comprising a communication interface configured to receive, from the HMD, an electrical signal indicating that the electronic device is connected with the HMD, wherein the processor is further configured to determine whether the user is wearing the HMD if the communication interface receives, from the HMD, the electrical signal indicating that the electronic device is connected with the HMD. 13. The electronic device of claim 12, wherein the processor is further configured to determine that the user is not wearing the HMD, if the communication interface does not receive the electrical signal indicating that the electronic device is connected with the HMD. 14. The electronic device of claim 10, wherein the processor is further configured to display a temporary image through the display and drive a Three-Dimensional (3D) engine to provide the VR service, if the processor determines that the electronic device is connected with the HMD. 15. The electronic device of claim 14, wherein the temporary image includes at least one of a black image, a logo image, and an image preset by the user. 16. The electronic device of claim 10, further comprising a communication interface configured to sense whether the electronic device is connected with the HMD. 17. The electronic device of claim 10, further comprising a sensor module configured to sense whether the user is wearing the HMD, wherein the processor is further configured to determine that the user is wearing the HMD if the sensor module senses that the user is wearing the HMD. 18. A method for providing a Virtual Reality (VR) service by a Head-Mounted Device (HMD), the method comprising: sensing whether a user is wearing the HMD while the HMD is connected with an electronic device; and if the user is wearing the HMD, transmitting, to the electronic device, a first electrical signal indicating that the user is wearing the HMD. 19. The method of claim 18, further comprising: before sensing whether the user is wearing the HMD, transmitting, to the electronic device, a second electrical signal indicating that the HMD is connected with the electronic device. 20. The method of claim 18, wherein sensing whether the user is wearing the HMD comprises sensing whether the user is wearing the HMD through a sensor module included in the HMD. 21. A Head-Mounted Device (HMD) for providing a Virtual Reality (VR) service, the HMD comprising: a sensor module configured to sense whether a user is wearing the HMD; and a communication interface configured to transmit, to an electronic device, if the user is wearing the HMD, a first electrical signal indicating that the user is wearing the HMD. 22. The HMD of claim 21, wherein the communication interface is further configured to transmit, to the electronic device, a second electrical signal indicating that the HMD is connected with the electronic device.
2,600
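The VR record above reduces to a small state machine: a connection signal moves the phone into a holding mode (temporary image, 3D engine warm-up), and a wear signal received while connected promotes it to the first operation mode that renders the VR service. Below is a minimal sketch of that logic; the mode names and the two boolean inputs are hypothetical stand-ins for the claimed electrical signals.

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()    # not docked: ordinary phone UI
    HMD_IDLE = auto()  # second operation mode: connected, not worn
    VR = auto()        # first operation mode: provide the VR service

def next_mode(connected: bool, worn: bool) -> Mode:
    """Map the two claimed signals onto an operation mode."""
    if not connected:
        return Mode.NORMAL
    # Only the wear signal, received while connected, triggers VR;
    # otherwise the device holds the second operation mode.
    return Mode.VR if worn else Mode.HMD_IDLE

assert next_mode(connected=True, worn=False) is Mode.HMD_IDLE
assert next_mode(connected=True, worn=True) is Mode.VR
assert next_mode(connected=False, worn=False) is Mode.NORMAL
```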
10,834
10,834
15,262,994
2,654
A queueless contact center is described along with various methods and mechanisms for administering the same. The contact center proposed herein provides the ability to, among other things, achieve true one-to-one matching. Solutions are also provided for managing data structures utilized by the queueless contact center. Furthermore, mechanisms for generating traditional queue-based performance views and metrics for the queueless contact center are proposed to help facilitate a smooth transition from traditional queue-based contact centers to the next generation contact centers described herein.
1. A method, comprising: determining that an element in a contact center has become available for assignment; determining a search criteria associated with processing requirements or capabilities of the element; scanning a bitmap for the search criteria to identify counter-elements in the contact center that are qualified to be assigned to the element; and based on the scanning, determining a set of counter-elements that are qualified to be assigned to the element. 2. The method of claim 1, wherein the element comprises a work item and wherein the counter-element comprises a resource. 3. The method of claim 2, wherein the bitmap is a resource bitmap that is associated with a resource pool and each bit in the resource bitmap corresponds to a single resource in the resource pool. 4. The method of claim 3, wherein a value of each bit in the resource bitmap is assigned either a 1 or 0 depending upon whether the resource is qualified to be assigned to the work item. 5. The method of claim 3, wherein every resource in the contact center is represented in the resource pool and resource bitmap. 6. The method of claim 2, wherein the resource bitmap also identifies a resource availability, wherein a qualification of a resource is analyzed prior to analyzing an availability of a resource, and wherein unqualified resources are not analyzed for availability. 7. The method of claim 1, wherein the bitmap is contiguous in memory. 8. The method of claim 1, wherein multiple bits of the bitmap are evaluated simultaneously during the scanning of the bitmap. 9. The method of claim 1, wherein the element comprises a resource and wherein the counter-element comprises a work item and wherein less than five percent of the counter-elements represented in the bitmap are included in the set of counter-elements. 10. A computer readable medium having stored thereon instructions that cause a computing system to execute a method, the instructions comprising: instructions configured to determine that an element in a contact center has become available for assignment; instructions configured to determine a search criteria associated with processing requirements or capabilities of the element; instructions configured to scan a bitmap for the search criteria to identify counter-elements in the contact center that are qualified to be assigned to the element; and instructions configured to, based on the scanning, determine a set of counter-elements that are qualified to be assigned to the element. 11. The computer readable medium of claim 10, further comprising instructions configured to determine a selected counter-element from the set of counter-elements and assign the element to the selected counter-element. 12. The computer readable medium of claim 11, wherein the element comprises a work item and wherein the counter-element comprises a resource. 13. The computer readable medium of claim 12, wherein the bitmap is a resource bitmap that is associated with a resource pool and each bit in the resource bitmap corresponds to a single resource in the resource pool. 14. The computer readable medium of claim 13, wherein a value of each bit in the resource bitmap is assigned either a 1 or 0 depending upon whether the resource is qualified to be assigned to the work item and wherein every resource in the contact center is represented in the resource pool and resource bitmap. 15. 
The computer readable medium of claim 14, wherein the value assigned to a bit in the resource bitmap depends upon attribute combinations of the associated resource and whether the resource has an attribute combination equal to an attribute combination of the work item. 16. The computer readable medium of claim 11, wherein multiple bits of the bitmap are evaluated simultaneously during the scanning of the bitmap. 17. The computer readable medium of claim 11, wherein the element comprises a resource and wherein the counter-element comprises a work item. 18. A work assignment mechanism, comprising: a work assignment engine configured to determine that an element in a contact center has become available for assignment, determine a search criteria associated with processing requirements or capabilities of the element, scan a bitmap for the search criteria to identify counter-elements in the contact center that are qualified to be assigned to the element, and, based on the scanning, determine a set of counter-elements that are qualified to be assigned to the element. 19. The system of claim 18, wherein the work assignment engine is further configured to assign a selected counter-element from the set of counter-elements to the element. 20. The system of claim 19, wherein the element comprises a work item and wherein the counter-element comprises a resource, wherein the bitmap is a resource bitmap that is associated with a resource pool and each bit in the resource bitmap corresponds to a single resource in the resource pool, wherein a value of each bit in the resource bitmap is assigned either a 1 or 0 depending upon whether the resource is qualified to be assigned to the work item, wherein every resource in the contact center is represented in the resource pool and resource bitmap, and wherein less than five percent of the counter-elements represented in the bitmap are included in the set of counter-elements.
A queueless contact center is described along with various methods and mechanisms for administering the same. The contact center proposed herein provides the ability to, among other things, achieve true one-to-one matching. Solutions are also provided for managing data structures utilized by the queueless contact center. Furthermore, mechanisms for generating traditional queue-based performance views and metrics for the queueless contact center are proposed to help facilitate a smooth transition from traditional queue-based contact centers to the next generation contact centers described herein.1. A method, comprising: determining that an element in a contact center has become available for assignment; determining a search criteria associated with processing requirements or capabilities of the element; scanning a bitmap for the search criteria to identify counter-elements in the contact center that are qualified to be assigned to the element; and based on the scanning, determining a set of counter-elements that are qualified to be assigned to the element. 2. The method of claim 1, wherein the element comprises a work item and wherein the counter-element comprises a resource. 3. The method of claim 2, wherein the bitmap is a resource bitmap that is associated with a resource pool and each bit in the resource bitmap corresponds to a single resource in the resource pool. 4. The method of claim 3, wherein a value of each bit in the resource bitmap is assigned either a 1 or 0 depending upon whether the resource is qualified to be assigned to the work item. 5. The method of claim 3, wherein every resource in the contact center is represented in the resource pool and resource bitmap. 6. The method of claim 2, wherein the resource bitmap also identifies a resource availability, wherein a qualification of a resource is analyzed prior to analyzing an availability of a resource, and wherein unqualified resources are not analyzed for availability. 7. The method of claim 1, wherein the bitmap is contiguous in memory. 8. The method of claim 1, wherein multiple bits of the bitmap are evaluated simultaneously during the scanning of the bitmap. 9. The method of claim 1, wherein the element comprises a resource and wherein the counter-element comprises a work item and wherein less than five percent of the counter-elements represented in the bitmap are included in the set of counter-elements. 10. A computer readable medium having stored thereon instructions that cause a computing system to execute a method, the instructions comprising: instructions configured to determine that an element in a contact center has become available for assignment; instructions configured to determine a search criteria associated with processing requirements or capabilities of the element; instructions configured to scan a bitmap for the search criteria to identify counter-elements in the contact center that are qualified to be assigned to the element; and instructions configured to, based on the scanning, determine a set of counter-elements that are qualified to be assigned to the element. 11. The computer readable medium of claim 10, further comprising instructions configured to determine a selected counter-element from the set of counter-elements and assign the element to the selected counter-element. 12. The computer readable medium of claim 11, wherein the element comprises a work item and wherein the counter-element comprises a resource. 13. 
The computer readable medium of claim 12, wherein the bitmap is a resource bitmap that is associated with a resource pool and each bit in the resource bitmap corresponds to a single resource in the resource pool. 14. The computer readable medium of claim 13, wherein a value of each bit in the resource bitmap is assigned either a 1 or 0 depending upon whether the resource is qualified to be assigned to the work item and wherein every resource in the contact center is represented in the resource pool and resource bitmap. 15. The computer readable medium of claim 14, wherein the value assigned to a bit in the resource bitmap depends upon attribute combinations of the associated resource and whether the resource has an attribute combination equal to an attribute combination of the work item. 16. The computer readable medium of claim 11, wherein multiple bits of the bitmap are evaluated simultaneously during the scanning of the bitmap. 17. The computer readable medium of claim 11, wherein the element comprises a resource and wherein the counter-element comprises a work item. 18. A work assignment mechanism, comprising: a work assignment engine configured to determine that an element in a contact center has become available for assignment, determine a search criteria associated with processing requirements or capabilities of the element, scan a bitmap for the search criteria to identify counter-elements in the contact center that are qualified to be assigned to the element, and, based on the scanning, determine a set of counter-elements that are qualified to be assigned to the element. 19. The system of claim 18, wherein the work assignment engine is further configured to assign a selected counter-element from the set of counter-elements to the element. 20. The system of claim 19, wherein the element comprises a work item and wherein the counter-element comprises a resource, wherein the bitmap is a resource bitmap that is associated with a resource pool and each bit in the resource bitmap corresponds to a single resource in the resource pool, wherein a value of each bit in the resource bitmap is assigned either a 1 or 0 depending upon whether the resource is qualified to be assigned to the work item, wherein every resource in the contact center is represented in the resource pool and resource bitmap, and wherein less than five percent of the counter-elements represented in the bitmap are included in the set of counter-elements.
2,600
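Claims 1 and 8 of the contact-center record above hinge on scanning a qualification bitmap, with multiple bits evaluated simultaneously, to shortlist qualified resources the moment a work item becomes available. Below is a minimal sketch of a word-at-a-time scan over such a bitmap, assuming bit i is 1 when resource i is qualified; the 64-bit word width and the names are hypothetical.

```python
def qualified_indices(bitmap: int, pool_size: int):
    """Yield indices of qualified resources, testing 64 bits per step."""
    for base in range(0, pool_size, 64):
        word = (bitmap >> base) & 0xFFFFFFFFFFFFFFFF
        while word:
            low = word & -word                 # isolate lowest set bit
            yield base + low.bit_length() - 1  # its resource index
            word ^= low                        # clear it and continue

# Resources 0, 3, and 70 are qualified in a 128-resource pool.
bitmap = (1 << 0) | (1 << 3) | (1 << 70)
assert list(qualified_indices(bitmap, 128)) == [0, 3, 70]
```

All-zero words fall through in a single comparison, which is the practical payoff when, per claim 9, under five percent of the represented counter-elements end up in the qualified set; availability (claim 6) would then be checked only for the yielded indices.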
10,835
10,835
15,973,475
2,624
Methods, systems and devices are provided for providing user interface navigation of screen display metrics of a device. In one example, a device is configured for capture of activity data for a user. The device includes a housing and a screen disposed on the housing to display a plurality of metrics which include metrics that characterize the activity captured over time. The device further includes a sensor disposed in the housing to capture physical contact upon the housing. A processor is included to process the physical contact to determine if the physical contact qualifies as an input. The processor enables the screen from an off state when the physical contact qualifies as the input. The screen is configured to display one or more of the plurality of metrics in accordance with a scroll order, and a first metric displayed in response to the physical contact that qualifies as the input.
1. (canceled) 2. A method, comprising: receiving sensor data corresponding to a first physical contact on a device, wherein the sensor data is generated by one or more motion sensors of the device; determining, based at least on the sensor data, that the first physical contact qualifies as a first type of input; in response to determining that the first physical contact qualifies as a first type of input, causing a screen of the device to display a first metric; determining, based at least on additional sensor data corresponding to a second physical contact on the device that is received subsequent to the first physical contact, that the second physical contact qualifies as a second type of input; and in response to determining that the second physical contact qualifies as a second type of input, causing the screen of the device to display a second metric, wherein the second metric replaces the first metric on the screen of the device. 3. The method of claim 2, further comprising determining, based at least on a motion profile of the sensor data, whether the first physical contact qualifies as an input to be used to control the device or activity data to be used to update one of the metrics maintained by the device. 4. The method of claim 2, wherein only one metric of the plurality of metrics is displayed on the screen of the device at a time. 5. The method of claim 2, further comprising causing the screen to transition from displaying the first metric to displaying the second metric in a specific direction. 6. The method of claim 2, wherein the device is a smartwatch. 7. The method of claim 2, wherein the first metric and the second metric are arranged in a scroll order, wherein the second metric immediately follows the first metric in the scroll order. 8. The method of claim 7, further comprising, for each subsequent physical contact that qualifies as a second type of input to the device, causing a next metric in the scroll order to be displayed on the screen. 9. The method of claim 7, further comprising determining the scroll order based on a user account associated with the device. 10. The method of claim 7, further comprising receiving the scroll order via a wireless connection from a portable computing device configured to communicate with the device. 11. The method of claim 2, wherein the first metric comprises a time metric and the second metric comprises an activity metric indicative of activity data captured over time by the device. 12. A device, comprising: one or more motion sensors configured to generate sensor data; a screen; and one or more processors configured to: receive sensor data corresponding to a first physical contact on the device; determine, based at least on the sensor data, that the first physical contact qualifies as a first type of input; in response to determining that the first physical contact qualifies as a first type of input, cause the screen to display a first metric; determine, based at least on additional sensor data corresponding to a second physical contact on the device that is received subsequent to the first physical contact, that the second physical contact qualifies as a second type of input; and in response to determining that the second physical contact qualifies as a second type of input, cause the screen to display a second metric, wherein the second metric replaces the first metric on the screen. 13. The device of claim 12, further comprising a wrist-attachable structure. 14. 
The device of claim 12, wherein the one or more motion sensors include an accelerometer. 15. The device of claim 12, further comprising a wireless transceiver configured to transmit the plurality of metrics to a portable computing device. 16. The device of claim 12, wherein at least one of the first metric and the second metric comprises a step count, floors climbed, stairs climbed, distance traveled, active minutes, or calories burned. 17. The device of claim 12, wherein the first type of input is the same as the second type of input. 18. The device of claim 12, wherein the first type of input is different from the second type of input. 19. The device of claim 12, wherein the first type of input is a double tap and the second type of input is a single tap. 20. The device of claim 12, wherein the one or more processors are further configured to: subsequent to the second physical contact, cause the screen to be deactivated; and in response to detecting another input to the device, cause the screen to be reactivated such that the first metric is displayed on the screen. 21. Non-transitory physical computer storage storing instructions, which, when executed by one or more processors, configure the one or more processors to: receive sensor data corresponding to a first physical contact on a device, the sensor data generated by one or more motion sensors of the device; determine, based at least on the sensor data, that the first physical contact qualifies as a first type of input; in response to determining that the first physical contact qualifies as a first type of input, cause a screen of the device to display a first metric; determine, based at least on additional sensor data corresponding to a second physical contact on the device that is received subsequent to the first physical contact, that the second physical contact qualifies as a second type of input; and in response to determining that the second physical contact qualifies as a second type of input, cause the screen of the device to display a second metric, wherein the second metric replaces the first metric on the screen.
Methods, systems and devices are provided for providing user interface navigation of screen display metrics of a device. In one example, a device is configured for capture of activity data for a user. The device includes a housing and a screen disposed on the housing to display a plurality of metrics which include metrics that characterize the activity captured over time. The device further includes a sensor disposed in the housing to capture physical contact upon the housing. A processor is included to process the physical contact to determine if the physical contact qualifies as an input. The processor enables the screen from an off state when the physical contact qualifies as the input. The screen is configured to display one or more of the plurality of metrics in accordance with a scroll order, and a first metric displayed in response to the physical contact that qualifies as the input.1. (canceled) 2. A method, comprising: receiving sensor data corresponding to a first physical contact on a device, wherein the sensor data is generated by one or more motion sensors of the device; determining, based at least on the sensor data, that the first physical contact qualifies as a first type of input; in response to determining that the first physical contact qualifies as a first type of input, causing a screen of the device to display a first metric; determining, based at least on additional sensor data corresponding to a second physical contact on the device that is received subsequent to the first physical contact, that the second physical contact qualifies as a second type of input; and in response to determining that the second physical contact qualifies as a second type of input, causing the screen of the device to display a second metric, wherein the second metric replaces the first metric on the screen of the device. 3. The method of claim 2, further comprising determining, based at least on a motion profile of the sensor data, whether the first physical contact qualifies as an input to be used to control the device or activity data to be used to update one of the metrics maintained by the device. 4. The method of claim 2, wherein only one metric of the plurality of metrics is displayed on the screen of the device at a time. 5. The method of claim 2, further comprising causing the screen to transition from displaying the first metric to displaying the second metric in a specific direction. 6. The method of claim 2, wherein the device is a smartwatch. 7. The method of claim 2, wherein the first metric and the second metric are arranged in a scroll order, wherein the second metric immediately follows the first metric in the scroll order. 8. The method of claim 7, further comprising, for each subsequent physical contact that qualifies as a second type of input to the device, causing a next metric in the scroll order to be displayed on the screen. 9. The method of claim 7, further comprising determining the scroll order based on a user account associated with the device. 10. The method of claim 7, further comprising receiving the scroll order via a wireless connection from a portable computing device configured to communicate with the device. 11. The method of claim 2, wherein the first metric comprises a time metric and the second metric comprises an activity metric indicative of activity data captured over time by the device. 12. 
A device, comprising: one or more motion sensors configured to generate sensor data; a screen; and one or more processors configured to: receive sensor data corresponding to a first physical contact on the device; determine, based at least on the sensor data, that the first physical contact qualifies as a first type of input; in response to determining that the first physical contact qualifies as a first type of input, cause the screen to display a first metric; determine, based at least on additional sensor data corresponding to a second physical contact on the device that is received subsequent to the first physical contact, that the second physical contact qualifies as a second type of input; and in response to determining that the second physical contact qualifies as a second type of input, cause the screen to display a second metric, wherein the second metric replaces the first metric on the screen. 13. The device of claim 12, further comprising a wrist-attachable structure. 14. The device of claim 12, wherein the one or more motion sensors include an accelerometer. 15. The device of claim 12, further comprising a wireless transceiver configured to transmit the plurality of metrics to a portable computing device. 16. The device of claim 12, wherein at least one of the first metric and the second metric comprises a step count, floors climbed, stairs climbed, distance traveled, active minutes, or calories burned. 17. The device of claim 12, wherein the first type of input is the same as the second type of input. 18. The device of claim 12, wherein the first type of input is different from the second type of input. 19. The device of claim 12, wherein the first type of input is a double tap and the second type of input is a single tap. 20. The device of claim 12, wherein the one or more processors are further configured to: subsequent to the second physical contact, cause the screen to be deactivated; and in response to detecting another input to the device, cause the screen to be reactivated such that the first metric is displayed on the screen. 21. Non-transitory physical computer storage storing instructions, which, when executed by one or more processors, configure the one or more processors to: receive sensor data corresponding to a first physical contact on a device, the sensor data generated by one or more motion sensors of the device; determine, based at least on the sensor data, that the first physical contact qualifies as a first type of input; in response to determining that the first physical contact qualifies as a first type of input, cause a screen of the device to display a first metric; determine, based at least on additional sensor data corresponding to a second physical contact on the device that is received subsequent to the first physical contact, that the second physical contact qualifies as a second type of input; and in response to determining that the second physical contact qualifies as a second type of input, cause the screen of the device to display a second metric, wherein the second metric replaces the first metric on the screen.
2,600
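The activity-tracker record above describes two classified tap types: a first type that wakes the screen onto the first metric, and a second type that advances through a fixed scroll order, with the sequence restarting after the screen is reactivated (claim 20). Below is a minimal sketch of that navigation, assuming the accelerometer data has already been classified into "wake" and "advance" inputs; the metric names and scroll order are hypothetical.

```python
from typing import Optional

SCROLL_ORDER = ["time", "steps", "distance", "calories", "active_minutes"]

class MetricScreen:
    def __init__(self) -> None:
        self.on = False
        self.index = 0

    def handle_input(self, kind: str) -> Optional[str]:
        """Return the metric to display, or None if the screen stays off."""
        if kind == "wake":                   # first input type
            self.on = True
            self.index = 0                   # restart at the first metric
        elif kind == "advance" and self.on:  # second input type
            self.index = (self.index + 1) % len(SCROLL_ORDER)
        return SCROLL_ORDER[self.index] if self.on else None

screen = MetricScreen()
assert screen.handle_input("wake") == "time"
assert screen.handle_input("advance") == "steps"
```

Wrapping at the end of the list is one design choice; claim 8 only requires that each second-type contact bring up the next metric in the scroll order.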
10,836
10,836
16,279,095
2,691
A controlling device has a moveable touch sensitive panel positioned above a plurality of switches. When the controlling device senses an activation of at least one of the plurality of switches caused by a movement of the touch sensitive panel resulting from an input at an input location upon the touch sensitive surface, the controlling device responds by transmitting a signal to an appliance wherein the signal is reflective of the input location upon the touch sensitive surface.
1. A remote control system for remotely controlling one or more devices and/or a user interface, the remote control system comprising: a remote control comprising: a plurality of user input buttons, at least one of the user input buttons configured to receive a user input event and comprising at least one metal dome and a printed circuit board; a plurality of sensors being coupled to the at least one of the user input buttons of the plurality of user input buttons, the plurality of sensors configured to generate sensor data in response to a user input event being received at the at least one of the user input buttons of the plurality of user input buttons; and user input event detection logic configured to receive the sensor data and identify whether the user input event received at the at least one of the user input buttons of the plurality of user input buttons was a click event or a touch event, wherein the user input event detection logic identifies that the user input event is the click event based on receiving sensor data indicating that the at least one metal dome is depressed such that it forms an electrical connection on the printed circuit board; and command selection logic configured to cause a first control command to be executed in response to determining that the user input event received at the at least one of the user input buttons of the plurality of user input buttons was the click event and to cause a second control command to be executed in response to determining that the user input event received at the at least one of the user input buttons of the plurality of user input buttons was the touch event. 2. The remote control system of claim 1, wherein the command selection logic comprises part of the remote control. 3. The remote control system of claim 1, further comprising device control command execution logic, wherein the first and/or second control command is a command for remotely controlling a controlled device, and wherein the device control command execution logic causes the first and/or second control command to be executed at the controlled device. 4. The remote control system of claim 1, wherein the at least one of the user input buttons of the plurality of user input buttons is a click pad having a plurality of sensors coupled thereto at corresponding sensor positions of the click pad, the click pad configured to receive a user input event at each of the sensor positions. 5. The remote control system of claim 1, wherein a different control command from a set of control commands is mapped to each of the sensor positions of the click pad for each of at least a click input event and a touch input event, wherein the set of control commands includes the first control command and the second control command. 6. The remote control system of claim 1, further comprising remote control user customization logic, the remote control user customization logic being configured to enable the user to selectively map different control commands of a plurality of control commands to different user input events. 7. The remote control system of claim 1, wherein control commands of the plurality of control commands are executable through interaction with a graphical user interface associated with the remote control displayed on a screen associated with a device in communication with the remote control system.
A controlling device has a moveable touch sensitive panel positioned above a plurality of switches. When the controlling device senses an activation of at least one of the plurality of switches caused by a movement of the touch sensitive panel resulting from an input at an input location upon the touch sensitive surface, the controlling device responds by transmitting a signal to an appliance wherein the signal is reflective of the input location upon the touch sensitive surface.1. A remote control system for remotely controlling one or more devices and/or a user interface, the remote control system comprising: a remote control comprising: a plurality of user input buttons, at least one of the user input buttons configured to receive a user input event and comprising at least one metal dome and a printed circuit board; a plurality of sensors being coupled to the at least one of the user input buttons of the plurality of user input buttons, the plurality of sensors configured to generate sensor data in response to a user input event being received at the at least one of the user input buttons of the plurality of user input buttons; and user input event detection logic configured to receive the sensor data and identify whether the user input event received at the at least one of the user input buttons of the plurality of user input buttons was a click event or a touch event, wherein the user input event detection logic identifies that the user input event is the click event based on receiving sensor data indicating that the at least one metal dome is depressed such that it forms an electrical connection on the printed circuit board; and command selection logic configured to cause a first control command to be executed in response to determining that the user input event received at the at least one of the user input buttons of the plurality of user input buttons was the click event and to cause a second control command to be executed in response to determining that the user input event received at the at least one of the user input buttons of the plurality of user input buttons was the touch event. 2. The remote control system of claim 1, wherein the command selection logic comprises part of the remote control. 3. The remote control system of claim 1, further comprising device control command execution logic, wherein the first and/or second control command is a command for remotely controlling a controlled device, and wherein the device control command execution logic causes the first and/or second control command to be executed at the controlled device. 4. The remote control system of claim 1, wherein the at least one of the user input buttons of the plurality of user input buttons is a click pad having a plurality of sensors coupled thereto at corresponding sensor positions of the click pad, the click pad configured to receive a user input event at each of the sensor positions. 5. The remote control system of claim 1, wherein a different control command from a set of control commands is mapped to each of the sensor positions of the click pad for each of at least a click input event and a touch input event, wherein the set of control commands includes the first control command and the second control command. 6. 
The remote control system of claim 1, further comprising remote control user customization logic, the remote control user customization logic being configured to enable the user to selectively map different control commands of a plurality of control commands to different user input events. 7. The remote control system of claim 1, wherein control commands of the plurality of control commands are executable through interaction with a graphical user interface associated with the remote control displayed on a screen associated with a device in communication with the remote control system.
2,600
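Claim 1 of the remote-control record above separates a click (the metal dome is depressed and closes a contact on the printed circuit board) from a touch (a sensor reports a finger without dome closure) and maps each to a different control command. Below is a minimal sketch of that discrimination; the two boolean readings and the command mapping are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ButtonSample:
    touch_sensed: bool  # touch sensor reports a finger on the button
    dome_closed: bool   # metal dome pressed into the PCB contact

def classify(sample: ButtonSample) -> Optional[str]:
    """Identify the user input event type, dome closure taking priority."""
    if sample.dome_closed:
        return "click"   # dispatch the first control command
    if sample.touch_sensed:
        return "touch"   # dispatch the second control command
    return None          # no user input event

COMMANDS = {"click": "SELECT", "touch": "HIGHLIGHT"}  # hypothetical mapping

event = classify(ButtonSample(touch_sensed=True, dome_closed=False))
assert COMMANDS[event] == "HIGHLIGHT"
```

Per claim 5, a real click pad would key this mapping by sensor position as well as event type, giving each position its own click and touch commands.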
10,837
10,837
16,356,820
2,646
The present application discloses a mobile terminal control method, including: receiving a communication request sent by a communication request initiating party, and calculating a time interval between a time when the communication request is received and a time when a previous communication request from the communication request initiating party is received; and if the time interval is greater than a preset threshold, skipping generating a vibrating and/or ringtone alert for the communication request, and detecting an online status of a communications software account associated with the communication request initiating party and sending prompt information to an online communications software account, where the prompt information is used to indicate that a mobile terminal is in a Do Not Disturb mode.
1. A system for processing communication requests, the system comprising: a first mobile phone configured to: receive a first SMS message from a second mobile phone and reply to the first SMS message, receive a second SMS message from a third mobile phone and not reply to the second SMS message, and activate a Do Not Disturb mode; wherein the second mobile phone is configured to: send a first communication request to the first mobile phone; wherein the third mobile phone is configured to: send a second communication request to the first mobile phone; and wherein the first mobile phone is further configured to: when the Do Not Disturb mode is activated, receive the first communication request and the second communication request, and when the Do Not Disturb mode is activated, determine to automatically send prompt information to the second mobile phone to reply to the first communication request and determine not to automatically send the prompt information to the third mobile phone. 2. The system of claim 1, wherein the prompt information indicates that the Do Not Disturb mode is activated on the first mobile phone. 3. The system of claim 2, wherein the prompt information is locally pre-stored in the first mobile phone. 4. The system of claim 1, wherein the prompt information is sent via an online communications software account. 5. The system of claim 1, wherein the first mobile phone is further configured to: generate an alert indicating that the first communication request is received. 6. The system of claim 1, wherein the first mobile phone is further configured to: detect a status of a communications software account associated with the second mobile phone. 7. The system of claim 4, wherein the prompt information is sent to multiple communications software accounts associated with the second mobile phone when the multiple communications software accounts are online. 8. The system of claim 6, wherein the prompt information is sent to the communications software account when the communications software account is in an online state and has a highest priority. 9. The system of claim 1, wherein the first communication request is a first instant message, and the second communication request is a second instant message. 10. The system of claim 4, wherein the online communications software account is an iMessage account. 11. A system for processing communications between a first mobile phone and a second mobile phone, the system comprising: a first mobile phone configured to: communicate with a second mobile phone, activate a Do Not Disturb mode, and when the Do Not Disturb mode is activated: receive an incoming call from the second mobile phone; determine a time interval between a first time when the incoming call is received and a second time when a previous incoming call from the second mobile phone is received; generate an alert for the incoming call when the time interval is less than a threshold; and automatically send prompt information to a communications software account associated with the second mobile phone when the communications software account is online to reply to a communication request from the second mobile phone, wherein the prompt information indicates the first mobile phone is in the Do Not Disturb mode; and wherein the second mobile phone is configured to: initiate the incoming call received by the first mobile phone, receive the prompt information sent by the first mobile phone, and send the communication request to the first mobile phone. 12. 
The system of claim 11, wherein the second mobile phone is recorded in the first mobile phone as a contact of the first mobile phone. 13. The system of claim 11, wherein the prompt information indicates a threshold and invites the second mobile phone to send another communication request to the first mobile phone within the threshold. 14. The system of claim 11, wherein the prompt information is locally pre-stored in the first mobile phone. 15. The system of claim 11, wherein the communication request is an instant message. 16. An electronic device, comprising: a display; one or more processors; and a memory for storing instructions which, when executed by the one or more processors, cause the electronic device to: receive a first SMS message from a first electronic device and reply to the first SMS message, receive a second SMS message from a second electronic device and not reply to the second SMS message, activate a do not disturb mode, and when the do not disturb mode is activated: receive a first communication request from the first electronic device and a second communication request from the second electronic device; and determine to automatically send prompt information to the first electronic device to reply to the first communication request and determine not to automatically send the prompt information to the second electronic device. 17. The electronic device of claim 16, wherein the instructions, when executed by the one or more processors, further cause the electronic device to: detect a status of a communications software account associated with the first electronic device. 18. The electronic device of claim 17, wherein the prompt information is sent to multiple communications software accounts associated with the first electronic device when the multiple communications software accounts are online. 19. The electronic device of claim 16, wherein the prompt information indicates that the do not disturb mode is activated. 20. The electronic device of claim 16, wherein the prompt information is sent via an online communications software account.
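The claims above specify decision logic rather than an implementation. As a minimal sketch only, the following Python fragment shows one way the time-interval test of claim 11 could combine with sending prompt information to an online account; the ten-minute threshold, the in-memory call log, the priority ordering, and the `send_prompt` helper are illustrative assumptions, not elements of the claims.

```python
# Sketch of the Do Not Disturb decision logic described in claim 11 (assumed details noted).
import time

THRESHOLD_S = 600  # preset threshold between requests from the same caller (assumed value)
PROMPT_TEXT = "The callee's phone is in Do Not Disturb mode."

last_request_time = {}   # caller id -> timestamp of the previous communication request
online_accounts = {}     # caller id -> list of (priority, account); lower number = higher priority

def send_prompt(account, text):
    print(f"prompt to {account}: {text}")  # placeholder for a real messaging call

def handle_request(caller, now=None):
    """Decide whether to alert locally or silently send prompt information instead."""
    now = time.time() if now is None else now
    prev = last_request_time.get(caller)
    last_request_time[caller] = now
    if prev is not None and now - prev < THRESHOLD_S:
        # A repeated request within the threshold is treated as urgent: alert the user.
        return "ring"
    # Otherwise stay silent and notify the caller over the highest-priority online account.
    accounts = sorted(online_accounts.get(caller, []))
    if accounts:
        send_prompt(accounts[0][1], PROMPT_TEXT)
    return "silent"

# Example: a second request from the same caller within the threshold rings through.
online_accounts["alice"] = [(1, "alice@im.example")]
print(handle_request("alice", now=0.0))   # silent, prompt sent
print(handle_request("alice", now=30.0))  # ring
```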
2,600
10,838
10,838
15,534,745
2,683
The present system generates output signals conveying information related to a position of one or more body parts of the subject, location of the body of the subject, and/or physiological information related to the subject; obtains a set of fall criteria that describe whether the subject is likely to fall; determines one or more body position parameters, one or more physiological parameters, and/or one or more body location parameters; compares the determined one or more physiological parameters, the one or more body position parameters, and/or the one or more body location parameters to criteria in the set of fall criteria; and, responsive to the one or more body position parameters, the one or more physiological parameters, and/or the one or more body location parameters satisfying individual fall criteria in the set of fall criteria, generates an alert.
1. A system that generates a potential fall alert for a subject in a support structure, the system comprising: one or more body position sensors that generate output signals conveying information related to a position of one or more body parts of the subject; one or more physiological sensors that generate output signals conveying physiological information related to the subject; and one or more physical computer processors configured by computer-readable instructions to: obtain a set of fall criteria that describe whether the subject is likely to fall; determine one or more body position parameters, and one or more physiological parameters based on the output signals generated by the one or more body position sensors, and the one or more physiological sensors; compare the determined one or more physiological parameters, and the one or more body position parameters to criteria in the set of fall criteria; and responsive to the one or more body position parameters, and the one or more physiological parameters satisfying the fall criteria in the set of fall criteria, generate an alert. 2. The system of claim 1, wherein the one or more body position sensors, and/or the one or more physiological sensors comprise at least one of: a motion sensor, a position sensor, a weight sensor, an optical sensor, a photoelectric sensor, a voice sensor, a sound sensor, a heart rate sensor, a blood pressure sensor, a respiration sensor, or a pulse sensor. 3. The system of claim 1, wherein the one or more physical computer processors are further configured to obtain initial baseline parameters related to the subject, the initial baseline parameters including baseline physiological parameters and baseline body position parameters, and wherein comparing the determined one or more physiological parameters, and the one or more body position parameters to criteria in the set of fall criteria includes comparing the initial baseline parameters to the determined one or more physiological parameters, and the determined one or more body position parameters. 4. The system of claim 3, wherein the one or more physical computer processors are configured to determine the set of fall criteria based on the output signals and/or the initial baseline parameters. 5. The system of claim 1, further comprising one or more body location sensors that generate output signals conveying information related to location of the body of the subject, and wherein the one or more physical computer processors further: determine the one or more body position parameters, the one or more physiological parameters, and one or more body location parameters based on the output signals generated by the one or more body position sensors, the one or more physiological sensors, and the one or more body location sensors; compare the determined one or more physiological parameters, the one or more body position parameters, and the one or more body location parameters to criteria in the set of fall criteria; and responsive to the one or more body position parameters, the one or more physiological parameters, and the one or more body location parameters satisfying the fall criteria in the set of fall criteria, generate an alert. 6.
A method for generating a potential fall alert for a subject in a support structure with a system comprising one or more body position sensors, one or more physiological sensors, and one or more physical computer processors, the method comprising: generating, with the one or more body position sensors, output signals conveying information related to a position of one or more body parts of the subject; generating, with the one or more physiological sensors, output signals conveying physiological information related to the subject; obtaining, with the one or more physical computer processors, a set of fall criteria that describe whether the subject is likely to fall; determining, with the one or more physical computer processors, one or more body position parameters, and one or more physiological parameters based on the output signals generated by the one or more body position sensors, and the one or more physiological sensors; comparing, with the one or more physical computer processors, the determined one or more physiological parameters, and the one or more body position parameters to criteria in the set of fall criteria; and responsive to the one or more body position parameters, and the one or more physiological parameters satisfying the fall criteria in the set of fall criteria, generating an alert with the one or more physical computer processors. 7. The method of claim 6, wherein the one or more body position sensors, and/or the one or more physiological sensors comprise at least one of: a motion sensor, a position sensor, a weight sensor, an optical sensor, a photoelectric sensor, a voice sensor, a sound sensor, a heart rate sensor, a blood pressure sensor, a respiration sensor, or a pulse sensor. 8. The method of claim 6, further comprising obtaining, with the one or more physical computer processors, initial baseline parameters related to the subject, the initial baseline parameters including baseline physiological parameters and baseline body position parameters, and wherein comparing the determined one or more physiological parameters, and the one or more body position parameters to criteria in the set of fall criteria includes comparing the initial baseline parameters to the determined one or more physiological parameters, and the determined one or more body position parameters. 9. The method of claim 8, further comprising determining, with the one or more physical computer processors, the set of fall criteria based on the output signals and/or the initial baseline parameters. 10. 
The method of claim 6, further comprising generating, with one or more body location sensors, output signals conveying information related to location of the body of the subject, and determining, with the one or more physical computer processors, the one or more body position parameters, the one or more physiological parameters, and one or more body location parameters based on the output signals generated by the one or more body position sensors, the one or more physiological sensors, and the one or more body location sensors; comparing, with the one or more physical computer processors, the determined one or more physiological parameters, the one or more body position parameters, and the one or more body location parameters to criteria in the set of fall criteria; and responsive to the one or more body position parameters, the one or more physiological parameters, and the one or more body location parameters satisfying the fall criteria in the set of fall criteria, generating an alert with the one or more physical computer processors. 11. A system to generate a potential fall alert for a subject in a support structure, the system comprising: means for generating output signals conveying information related to a position of one or more body parts of the subject; means for generating output signals conveying physiological information related to the subject; means for obtaining a set of fall criteria that describe whether the subject is likely to fall; means for determining one or more body position parameters, and one or more physiological parameters based on the output signals; means for comparing the determined one or more physiological parameters, and the one or more body position parameters to criteria in the set of fall criteria; and means for generating an alert responsive to the one or more body position parameters, and the one or more physiological parameters satisfying the fall criteria in the set of fall criteria. 12. The system of claim 11, wherein the means for generating output signals conveying information related to a position of one or more body parts of the subject, and the means for generating output signals conveying physiological information related to the subject comprise at least one of: a motion sensor, a position sensor, a weight sensor, an optical sensor, a photoelectric sensor, a voice sensor, a sound sensor, a heart rate sensor, a blood pressure sensor, a respiration sensor, or a pulse sensor. 13. The system of claim 11, further comprising means for obtaining initial baseline parameters related to the subject, the initial baseline parameters including baseline physiological parameters and baseline body position parameters, and wherein comparing the determined one or more physiological parameters, and the one or more body position parameters to fall criteria in the set of fall criteria includes comparing the initial baseline parameters to the determined one or more physiological parameters, and the determined one or more body position parameters. 14. The system of claim 13, further comprising means for determining the set of fall criteria based on the output signals and/or the initial baseline parameters. 15.
The system of claim 11, further comprising means for generating output signals conveying information related to location of the body of the subject, means for determining the one or more body position parameters, the one or more physiological parameters, and one or more body location parameters based on the output signals; means for comparing the determined one or more physiological parameters, the one or more body position parameters, and the one or more body location parameters to fall criteria in the set of fall criteria, and means for generating an alert responsive to the one or more body position parameters, the one or more physiological parameters, and the one or more body location parameters satisfying the fall criteria in the set of fall criteria.
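The comparison step recited in these claims reduces to checking each determined parameter against its corresponding criterion. As a minimal sketch, assuming simple threshold-style criteria, the following Python fragment illustrates the conjunctive comparison of claims 1 and 5; the parameter names, threshold values, and the `FallCriteria` container are illustrative stand-ins, not from the claims.

```python
# Sketch of comparing body position, physiological, and body location
# parameters to a set of fall criteria (all thresholds are assumed).
from dataclasses import dataclass

@dataclass
class FallCriteria:
    max_torso_angle_deg: float   # body position criterion
    max_heart_rate_bpm: float    # physiological criterion
    min_edge_distance_m: float   # body location criterion

def should_alert(body_position, physiology, body_location, criteria):
    """Return True when every monitored parameter satisfies its fall criterion."""
    position_risk = body_position["torso_angle_deg"] > criteria.max_torso_angle_deg
    physiological_risk = physiology["heart_rate_bpm"] > criteria.max_heart_rate_bpm
    location_risk = body_location["edge_distance_m"] < criteria.min_edge_distance_m
    return position_risk and physiological_risk and location_risk

criteria = FallCriteria(max_torso_angle_deg=45.0,
                        max_heart_rate_bpm=110.0,
                        min_edge_distance_m=0.1)

# A subject sitting up, tachycardic, and near the edge of the support structure.
alert = should_alert({"torso_angle_deg": 60.0},
                     {"heart_rate_bpm": 120.0},
                     {"edge_distance_m": 0.05},
                     criteria)
print("generate alert" if alert else "no alert")
```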
2,600
10,839
10,839
16,264,115
2,651
In general, techniques are described that support scalable unified audio rendering. A device comprising an audio decoder, a memory, and a processor may be configured to perform various aspects of the techniques. The audio decoder may decode, from a bitstream, first audio data and second audio data. The memory may store the first audio data and the second audio data. The processor may render the first audio data into first spatial domain audio data for playback by virtual speakers at a set of virtual speaker locations, and render the second audio data into second spatial domain audio data for playback by the virtual speakers at the set of virtual speaker locations. The processor may also mix the first spatial domain audio data and the second spatial domain audio data to obtain mixed spatial domain audio data, and convert the mixed spatial domain audio data to scene-based audio data.
1. A device configured to support unified audio rendering, the device comprising: an audio decoder configured to decode, from a bitstream, first audio data for a time frame and second audio data for the time frame; a memory configured to store the first audio data and the second audio data; and one or more processors configured to: render the first audio data into first spatial domain audio data for playback by virtual speakers at a set of virtual speaker locations; render the second audio data into second spatial domain audio data for playback by the virtual speakers at the set of virtual speaker locations; mix the first spatial domain audio data and the second spatial domain audio data to obtain mixed spatial domain audio data; and convert the mixed spatial domain audio data to scene-based audio data. 2. The device of claim 1, wherein the one or more processors are further configured to determine, based on headset capability data representative of one or more capabilities of a headset and prior to rendering the first audio data and the second audio data, the set of virtual speaker locations at which the virtual speakers are located. 3. The device of claim 1, wherein the first audio data comprises one of first scene-based audio data, first channel-based audio data, or first object-based audio data, and wherein the second audio data comprises one of second scene-based audio data, second channel-based audio data, or second object-based audio data. 4. The device of claim 1, wherein the one or more processors are configured to transform the mixed spatial domain audio data from the spatial domain to a spherical harmonic domain, and wherein the scene-based audio data comprises higher order ambisonic audio data defined in the spherical harmonic domain as a set of one or more higher order ambisonic coefficients corresponding to spherical basis functions. 5. The device of claim 1, wherein the set of virtual speaker locations comprises a set of virtual speaker locations uniformly distributed about a sphere in which a head of a listener is positioned at a center of the sphere. 6. The device of claim 1, wherein the set of virtual speaker locations includes Fliege points. 7. The device of claim 1, wherein the one or more processors are configured to render, based on headset-captured audio data, the first audio data to obtain the first spatial domain audio data, wherein the headset-captured audio data comprises audio data representing sounds detected by a headset, and wherein the one or more processors are configured to render, based on the headset-captured audio data, the second audio data to obtain the second spatial domain audio data. 8. The device of claim 1, further comprising an interface configured to transmit, to a headset, the scene-based audio data and data indicating the set of virtual speaker locations. 9. The device of claim 8, wherein the headset comprises a wireless headset. 10. The device of claim 8, wherein the headset comprises a computer-mediated reality headset that supports one or more of virtual reality, augmented reality, and mixed reality. 11.
The device of claim 1, wherein the audio decoder is further configured to decode, from the bitstream, third audio data for the time frame, wherein the memory is further configured to store the third audio data, wherein the one or more processors are further configured to render the third audio data into third spatial domain audio data for playback by the virtual speakers at the set of virtual speaker locations, and wherein the one or more processors are configured to mix the first spatial domain audio data, the second spatial domain audio data, and the third spatial domain audio data to obtain the mixed spatial domain audio data. 12. A method of supporting unified audio rendering, the method comprising: decoding, by a computing device and from a bitstream, first audio data for a time frame and second audio data for the time frame; rendering, by the computing device, the first audio data into first spatial domain audio data for playback by virtual speakers at a set of virtual speaker locations; rendering, by the computing device, the second audio data into second spatial domain audio data for playback by the virtual speakers at the set of virtual speaker locations; mixing, by the computing device, the first spatial domain audio data and the second spatial domain audio data to obtain mixed spatial domain audio data; and converting, by the computing device, the mixed spatial domain audio data to scene-based audio data. 13. The method of claim 12, further comprising determining, based on headset capability data representative of one or more capabilities of a headset and prior to rendering the first audio data and the second audio data, the set of virtual speaker locations at which the virtual speakers are located. 14. The method of claim 12, wherein the first audio data comprises one of first scene-based audio data, first channel-based audio data, or first object-based audio data, and wherein the second audio data comprises one of second scene-based audio data, second channel-based audio data, or second object-based audio data. 15. The method of claim 12, wherein converting the mixed spatial domain audio data comprises transforming the mixed spatial domain audio data from the spatial domain to a spherical harmonic domain, and wherein the scene-based audio data comprises higher order ambisonic audio data defined in the spherical harmonic domain as a set of one or more higher order ambisonic coefficients corresponding to spherical basis functions. 16. The method of claim 12, wherein the set of virtual speaker locations comprises a set of virtual speaker locations uniformly distributed about a sphere in which a head of a listener is positioned at a center of the sphere. 17. The method of claim 12, wherein the set of virtual speaker locations includes Fliege points. 18. The method of claim 12, wherein rendering the first audio data comprises rendering, based on headset-captured audio data, the first audio data to obtain the first spatial domain audio data, wherein the headset-captured audio data comprises audio data representing sounds detected by a headset, and wherein rendering the second audio data comprises rendering, based on the headset-captured audio data, the second audio data to obtain the second spatial domain audio data. 19. The method of claim 12, further comprising transmitting, to a headset, the scene-based audio data and data indicating the set of virtual speaker locations. 20. The method of claim 19, wherein the headset comprises a wireless headset. 21.
The method of claim 19, wherein the headset comprises a computer-mediated reality headset that supports one or more of virtual reality, augmented reality, and mixed reality. 22. The method of claim 12, further comprising: decoding, from the bitstream, third audio data for the time frame; and rendering the third audio data into third spatial domain audio data for playback by the virtual speakers at the set of virtual speaker locations, wherein mixing the first spatial domain audio data and the second spatial domain audio data comprises mixing the first spatial domain audio data, the second spatial domain audio data, and the third spatial domain audio data to obtain the mixed spatial domain audio data. 23. A device configured to support unified audio rendering, the device comprising: means for decoding, from a bitstream, first audio data for a time frame and second audio data for the time frame; means for rendering the first audio data into first spatial domain audio data for playback by virtual speakers at a set of virtual speaker locations; means for rendering the second audio data into second spatial domain audio data for playback by the virtual speakers at the set of virtual speaker locations; means for mixing the first spatial domain audio data and the second spatial domain audio data to obtain mixed spatial domain audio data; and means for converting the mixed spatial domain audio data to scene-based audio data. 24. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed, cause one or more processors to: decode, from a bitstream, first audio data for a time frame and second audio data for the time frame; render the first audio data into first spatial domain audio data for playback by virtual speakers at a set of virtual speaker locations; render the second audio data into second spatial domain audio data for playback by the virtual speakers at the set of virtual speaker locations; mix the first spatial domain audio data and the second spatial domain audio data to obtain mixed spatial domain audio data; and convert the mixed spatial domain audio data to scene-based audio data.
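The render-mix-convert pipeline in claims 1 and 12 can be pictured as three matrix operations. The following Python sketch, assuming first-order ambisonics (B-format) as the scene-based output and a toy fixed-gain renderer, is an illustration only: the four-speaker layout, panning gains, and test signal are assumptions, and the claims neither prescribe them nor limit the output to first order (claim 6 mentions Fliege points, which this sketch does not use).

```python
# Sketch of render -> mix -> convert to scene-based (ambisonic) audio.
import numpy as np

# Virtual speaker layout: (azimuth, elevation) in radians (assumed layout).
speaker_dirs = np.array([[0.0, 0.0], [np.pi / 2, 0.0],
                         [np.pi, 0.0], [-np.pi / 2, 0.0]])

def render_to_speakers(mono_frame, gains):
    """Render one mono stream to the virtual speakers with fixed gains."""
    return np.outer(gains, mono_frame)            # shape: (speakers, samples)

def encode_first_order(spatial_frames):
    """Project speaker feeds onto first-order spherical harmonics (W, X, Y, Z)."""
    az, el = speaker_dirs[:, 0], speaker_dirs[:, 1]
    sh = np.stack([np.full_like(az, 1.0 / np.sqrt(2)),   # W (omnidirectional)
                   np.cos(az) * np.cos(el),              # X
                   np.sin(az) * np.cos(el),              # Y
                   np.sin(el)])                          # Z
    return sh @ spatial_frames                    # shape: (4, samples)

frame = np.sin(2 * np.pi * 440 * np.arange(64) / 48000)  # one 64-sample time frame
first = render_to_speakers(frame, gains=np.array([1.0, 0.2, 0.0, 0.2]))
second = render_to_speakers(frame, gains=np.array([0.0, 0.2, 1.0, 0.2]))
mixed = first + second                        # mix in the spatial domain
ambisonic = encode_first_order(mixed)         # scene-based audio data
print(ambisonic.shape)                        # (4, 64): four coefficients per sample
```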
2,600
10,840
10,840
16,427,163
2,643
Methods and apparatuses are described in which an unlicensed spectrum is used for Long Term Evolution (LTE) communications. One method includes performing a clear channel assessment (CCA) for an unlicensed spectrum in a current gating interval to determine whether the unlicensed spectrum is available for a transmission in a next transmission interval, and gating OFF the transmission in the unlicensed spectrum for the next transmission interval when the determination is that the unlicensed spectrum is unavailable.
1. A method for wireless communications, comprising: performing a clear channel assessment (CCA) for an unlicensed spectrum in a current gating interval to determine whether the unlicensed spectrum is available for a transmission in a next transmission interval; and gating OFF the transmission in the unlicensed spectrum for the next transmission interval when the determination is that the unlicensed spectrum is unavailable.
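The gating step in claim 1 is a straightforward conditional on the CCA outcome. As a rough sketch, assuming an energy-detect style assessment, the following Python fragment illustrates the idea; `measure_energy_dbm` and the -62 dBm threshold are stand-ins for real radio measurements and are assumptions, not elements of the claim.

```python
# Sketch of CCA-based gating for transmissions in an unlicensed spectrum.
import random

CCA_THRESHOLD_DBM = -62.0  # typical energy-detect threshold (assumed value)

def measure_energy_dbm():
    """Placeholder for sampling energy on the unlicensed channel."""
    return random.uniform(-90.0, -40.0)

def gate_next_interval():
    """Perform a CCA in the current gating interval and gate the next one."""
    busy = measure_energy_dbm() > CCA_THRESHOLD_DBM
    if busy:
        return "OFF"  # spectrum unavailable: gate OFF the transmission
    return "ON"       # spectrum clear: transmit in the next transmission interval

for interval in range(3):
    print(f"interval {interval}: transmission gated {gate_next_interval()}")
```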
2,600
10,841
10,841
16,199,500
2,643
A system for, and a method of, providing a service related to an object include receiving property information of the object when a device is located within a predetermined distance from the object, requesting an available service from a server based on a current location of the device that received the property information and on the received property information, and receiving the requested service from the server.
1. A method by which an object provides information with respect to a service to a device, the method comprising: forming a communication network including the device and the object when the device is located within a predetermined range from the object; transmitting property information of the object to the device via the communication network; receiving, from the device, context information of the device which is related to the property information of the object; based on the received context information of the device, determining an intention of a user of the device; and transmitting, to the device, information with respect to a service corresponding to the determined intention of the user. 2. The method of claim 1, wherein the transmitting information with respect to the service comprises transmitting link information of a server for receiving the service corresponding to the determined intention of the user. 3. The method of claim 1, further comprising: determining a plurality of types of services corresponding to the determined intention of the user; transmitting, to the device, the determined plurality of types of services; receiving a type of service which is selected based on a user input via the device; and transmitting, to the device, link information of a server for receiving the service according to the selected type of service. 4. The method of claim 3, further comprising determining types of specific information to be provided by the determined types of services. 5. The method of claim 1, wherein the receiving the context information of the device comprises receiving the context information related to an application being executed by the device. 6. The method of claim 1, wherein the receiving the context information of the device comprises receiving the context information related to an execution history of applications executed by the device. 7. The method of claim 1, wherein the receiving the context information of the device comprises receiving the context information related to information regarding a user of the device, wherein the information regarding the user of the device comprises at least one of a gender of the user and a job of the user. 8. An object which provides information with respect to a service to a device, the object comprising: a controller configured to: form a communication network including the device and the object when the device is located within a predetermined range from the object; transmit property information of the object to the device via the communication network; receive, from the device, context information of the device which is related to the property information of the object; based on the received context information of the device, determine an intention of a user; and transmit, to the device, information with respect to a service corresponding to the determined intention of the user. 9.
A method by which a device receives information with respect to a service from an object, the method comprising: forming a communication network including the device and an object when the object is located within a predetermined range from the device; receiving property information of the object from the object via the communication network; transmitting, to the object, context information of the device which is related to the property information of the object; and receiving, from the object, information with respect to a service corresponding to an intention of a user of the device, wherein the intention of the user is determined by the object based on the context information of the device. 10. The method of claim 9, wherein the receiving information with respect to the service comprises receiving link information of a server for receiving the service corresponding to the determined intention of the user. 11. The method of claim 9, further comprising: receiving, from the object, a plurality of types of services corresponding to the determined intention of the user; selecting, based on a user input via the device, a type of service from among the plurality of types of services; transmitting, to the object, the selected type of service; and receiving, from the object, link information of a server for receiving the service according to the selected type of service. 12. The method of claim 9, wherein the transmitting the context information of the device comprises transmitting the context information related to an application being executed by the device. 13. The method of claim 9, wherein the transmitting the context information of the device comprises transmitting the context information related to an execution history of applications executed by the device. 14. The method of claim 9, wherein the transmitting the context information of the device comprises transmitting the context information related to information regarding a user of the device, wherein the information regarding the user of the device comprises at least one of a gender of the user and a job of the user. 15. A device which receives information with respect to a service from an object, the device comprising: a controller configured to: form a communication network including the device and an object when the object is located within a predetermined range from the device; receive property information of the object from the object via the communication network; transmit, to the object, context information of the device which is related to the property information of the object; and receive, from the object, information with respect to a service corresponding to an intention of a user of the device, wherein the intention of the user is determined by the object based on the context information of the device.
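The exchange recited in claims 1 and 9 amounts to a small request/response protocol between the object and the device. As a minimal sketch, assuming plain Python objects in place of a real short-range network, the fragment below illustrates the flow; the intent-inference rule, the link table, and the context fields are illustrative assumptions, not elements of the claims.

```python
# Sketch of the object/device exchange: property info out, context in,
# intent-matched service link back (all specifics are assumed).
class SmartObject:
    def __init__(self, properties, intent_to_link):
        self.properties = properties          # property information of the object
        self.intent_to_link = intent_to_link  # intent -> server link information

    def infer_intent(self, context):
        # Toy rule: the foreground app hints at what the user wants to do.
        return "buy" if context["foreground_app"] == "shopping" else "learn"

    def handle_context(self, context):
        intent = self.infer_intent(context)
        return {"intent": intent, "link": self.intent_to_link[intent]}

class Device:
    def __init__(self, context):
        self.context = context

    def on_enter_range(self, obj):
        properties = obj.properties                   # receive property information
        reply = obj.handle_context(self.context)      # send context, get service info
        return properties, reply

coffee_maker = SmartObject({"type": "coffee maker"},
                           {"buy": "https://shop.example/coffee",
                            "learn": "https://docs.example/coffee"})
device = Device({"foreground_app": "shopping",
                 "user": {"gender": "F", "job": "engineer"}})
print(device.on_enter_range(coffee_maker))
```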
A system for, and a method of, providing a service related to an object include receiving property information of the object when a device is located within a predetermined distance from the object, requesting an available service from a server based on a current location of the device that received the property information and on the received property information, and receiving the requested service from the server.1. A method by which an object provides information with respect to a service to a device, the method comprising: forming a communication network including the device and the object when the device is located within a predetermined range from the object; transmitting property information of the object to the device via the communication network; receiving, from the device, context information of the device which is related to the property information of the object; determining, based on the received context information of the device, an intention of a user of the device; and transmitting, to the device, information with respect to a service corresponding to the determined intention of the user. 2. The method of claim 1, wherein the transmitting information with respect to the service comprises transmitting link information of a server for receiving the service corresponding to the determined intention of the user. 3. The method of claim 1, further comprising: determining a plurality of types of services corresponding to the determined intention of the user; transmitting, to the device, the determined plurality of types of services; receiving a type of service which is selected based on a user input via the device; and transmitting, to the device, link information of a server for receiving the service according to the selected type of service. 4. The method of claim 3, further comprising determining types of specific information to be provided by the determined types of services. 5. The method of claim 1, wherein the receiving the context information of the device comprises receiving the context information related to an application being executed by the device. 6. The method of claim 1, wherein the receiving the context information of the device comprises receiving the context information related to an execution history of applications executed by the device. 7. The method of claim 1, wherein the receiving the context information of the device comprises receiving the context information related to information regarding a user of the device, wherein the information regarding the user of the device comprises at least one of a gender of the user and a job of the user. 8. An object which provides information with respect to a service to a device, the object comprising: a controller configured to: form a communication network including the device and the object when the device is located within a predetermined range from the object; transmit property information of the object to the device via the communication network; receive, from the device, context information of the device which is related to the property information of the object; determine, based on the received context information of the device, an intention of a user; and transmit, to the device, information with respect to a service corresponding to the determined intention of the user. 9. 
A method by which a device receives information with respect to a service from an object, the method comprising: forming a communication network including the device and an object when the object is located within a predetermined range from the device; receiving property information of the object from the object via the communication network; transmitting, to the object, context information of the device which is related to the property information of the object; and receiving, from the object, information with respect to a service corresponding to an intention of a user of the device, wherein the intention of the user is determined by the object based on the context information of the device. 10. The method of claim 9, wherein the receiving information with respect to the service comprises receiving link information of a server for receiving the service corresponding to the determined intention of the user. 11. The method of claim 9, further comprising: receiving, from the object, a plurality of types of services corresponding to the determined intention of the user; selecting, based on a user input via the device, a type of service from among the plurality of types of services; transmitting, to the object, the selected type of service; and receiving, from the object, link information of a server for receiving the service according to the selected type of service. 12. The method of claim 9, wherein the transmitting the context information of the device comprises transmitting the context information related to an application being executed by the device. 13. The method of claim 9, wherein the transmitting the context information of the device comprises transmitting the context information related to an execution history of applications executed by the device. 14. The method of claim 9, wherein the transmitting the context information of the device comprises transmitting the context information related to information regarding a user of the device, wherein the information regarding the user of the device comprises at least one of a gender of the user and a job of the user. 15. A device which receives information with respect to a service from an object, the device comprising: a controller configured to: form a communication network including the device and an object when the object is located within a predetermined range from the device; receive property information of the object from the object via the communication network; transmit, to the object, context information of the device which is related to the property information of the object; and receive, from the object, information with respect to a service corresponding to an intention of a user of the device, wherein the intention of the user is determined by the object based on the context information of the device.
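To make the claimed exchange concrete, a minimal device-side sketch in Python follows. It is an illustration only: the channel object and every method on it (receive_property_info, send_context, and so on) are hypothetical stand-ins, since the claims name no concrete API.

# Device-side sketch of the object/service exchange in method claims 9-11.
# The channel object and all of its methods are hypothetical stand-ins.
def request_service(channel, device_context):
    prop = channel.receive_property_info()        # property information of the object
    relevant_keys = prop.get("relevant_keys", [])
    related = {k: v for k, v in device_context.items() if k in relevant_keys}
    channel.send_context(related)                 # context related to the property information
    offer = channel.receive_service_offer()       # service(s) matching the inferred intention
    if isinstance(offer, list):                   # several service types were offered
        choice = offer[0]                         # stand-in for a real user selection
        channel.send_selection(choice)
        return channel.receive_service_link()     # link information of the serving server
    return offer                                  # a single service (or its link) was sent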
2,600
10,842
10,842
16,229,172
2,677
A processor automatically determines the machine/job state of a document processing apparatus. The job/status data is automatically displayed on a first display portion of a multi-part screen of the document processing apparatus. A graphic item that is indicative of the machine/job state is only displayed on a second display portion of the multi-part screen of the document processing apparatus. The graphic item indicates the machine/job state of the document processing apparatus using facial expressions, engineering and/or math icons, etc.
1. An apparatus comprising: a document processing apparatus; a user interface connected to an exterior of the document processing apparatus, wherein the user interface includes a screen having at least two planar display portions that lie in different planes; and a processor operatively connected to the screen, wherein a first display portion of the at least two planar display portions is positioned on a top of the document processing apparatus and a second display portion of the at least two planar display portions is positioned on a side of the document processing apparatus, wherein the side is approximately perpendicular to the top, and wherein the processor is adapted to automatically cause the first display portion to display alphanumeric characters of a machine/job state of the document processing apparatus, and automatically cause the second display portion to only display a graphic item lacking alphanumeric characters indicative of the machine/job state while the first display portion displays the alphanumeric characters of the machine/job state. 2. The apparatus according to claim 1, wherein the graphic item indicates the machine/job state of the document processing apparatus using facial expressions. 3. The apparatus according to claim 1, wherein the graphic item indicates the machine/job state of the document processing apparatus using engineering and math icons. 4. The apparatus according to claim 1, wherein the machine/job state of the document processing apparatus comprises error conditions, warning conditions, active processing conditions, and processing complete conditions. 5. The apparatus according to claim 1, wherein the document processing apparatus has a bottom adjacent a surface upon which the document processing apparatus rests, wherein the top is opposite the bottom, and wherein the side is between the top and the bottom. 6. The apparatus according to claim 1, wherein the at least two planar display portions are non-parallel to one another. 7. The apparatus according to claim 1, wherein the at least two planar display portions are different sizes. 8. An apparatus comprising: a document processing apparatus; a curved user interface connected to at least two non-parallel surfaces of an exterior of the document processing apparatus, wherein the curved user interface includes a screen having at least two planar display portions; and a processor operatively connected to the screen, wherein a first display portion of the at least two planar display portions is positioned on a top of the document processing apparatus and a second display portion of the at least two planar display portions is positioned on a side of the document processing apparatus, wherein the side is approximately perpendicular to the top, and wherein the processor is adapted to automatically cause the first display portion to display alphanumeric characters of a machine/job state of the document processing apparatus, and automatically cause the second display portion to only display a graphic item lacking alphanumeric characters indicative of the machine/job state while the first display portion displays the alphanumeric characters of the machine/job state. 9. The apparatus according to claim 8, wherein the graphic item indicates the machine/job state of the document processing apparatus using facial expressions. 10. The apparatus according to claim 8, wherein the graphic item indicates the machine/job state of the document processing apparatus using engineering and math icons. 11. 
The apparatus according to claim 8, wherein the machine/job state of the document processing apparatus comprises error conditions, warning conditions, active processing conditions, and processing complete conditions. 12. The apparatus according to claim 8, wherein the document processing apparatus has a bottom adjacent a surface upon which the document processing apparatus rests, wherein the top is opposite the bottom, wherein sides are between the top and the bottom and approximately perpendicular to the top and the bottom, and wherein a first display portion of the at least two planar display portions is positioned on the top and a second display portion of the at least two planar display portions is positioned on one of the sides. 13. The apparatus according to claim 12, wherein the second display portion is approximately perpendicular to the surface upon which the document processing apparatus rests. 14. The apparatus according to claim 8, wherein the at least two planar display portions are different sizes. 15. A method comprising: determining, by a processor of a document processing apparatus, a machine/job state of the document processing apparatus, wherein the document processing apparatus comprises a user interface connected to an exterior of the document processing apparatus, wherein the user interface includes a screen having at least two planar display portions that lie in different planes; displaying alphanumeric characters of a machine/job state of the document processing apparatus on a first display portion of the at least two planar display portions; and displaying only a graphic item lacking alphanumeric characters indicative of the machine/job state of the document processing apparatus on a second display portion of the at least two planar display portions while the first display portion displays the alphanumeric characters of the machine/job state, wherein the first display portion of the at least two planar display portions is positioned on a top of the document processing apparatus and the second display portion of the at least two planar display portions is positioned on a side of the document processing apparatus, and wherein the side is approximately perpendicular to the top. 16. The method according to claim 15, wherein the displaying the graphic item indicates the machine/job state of the document processing apparatus using facial expressions. 17. The method according to claim 15, wherein the displaying the graphic item indicates the machine/job state of the document processing apparatus using engineering and math icons. 18. The method according to claim 15, wherein the displaying the graphic item indicates the machine/job state by graphically displaying error conditions, warning conditions, active processing conditions, and processing complete conditions. 19. The method according to claim 15, wherein the document processing apparatus has a bottom adjacent a surface upon which the document processing apparatus rests, wherein the top is opposite the bottom, and wherein the side is between the top and the bottom. 20. The method according to claim 15, wherein the at least two planar display portions are non-parallel to one another and are different sizes.
A processor automatically determines the machine/job state of a document processing apparatus. The job/status data is automatically displayed on a first display portion of a multi-part screen of the document processing apparatus. A graphic item that is indicative of the machine/job state is only displayed on a second display portion of the multi-part screen of the document processing apparatus. The graphic item indicates the machine/job state of the document processing apparatus using facial expressions, engineering and/or math icons, etc.1. An apparatus comprising: a document processing apparatus; a user interface connected to an exterior of the document processing apparatus, wherein the user interface includes a screen having at least two planar display portions that lie in different planes; and a processor operatively connected to the screen, wherein a first display portion of the at least two planar display portions is positioned on a top of the document processing apparatus and a second display portion of the at least two planar display portions is positioned on a side of the document processing apparatus, wherein the side is approximately perpendicular to the top, and wherein the processor is adapted to automatically cause the first display portion to display alphanumeric characters of a machine/job state of the document processing apparatus, and automatically cause the second display portion to only display a graphic item lacking alphanumeric characters indicative of the machine/job state while the first display portion displays the alphanumeric characters of the machine/job state. 2. The apparatus according to claim 1, wherein the graphic item indicates the machine/job state of the document processing apparatus using facial expressions. 3. The apparatus according to claim 1, wherein the graphic item indicates the machine/job state of the document processing apparatus using engineering and math icons. 4. The apparatus according to claim 1, wherein the machine/job state of the document processing apparatus comprises error conditions, warning conditions, active processing conditions, and processing complete conditions. 5. The apparatus according to claim 1, wherein the document processing apparatus has a bottom adjacent a surface upon which the document processing apparatus rests, wherein the top is opposite the bottom, and wherein the side is between the top and the bottom. 6. The apparatus according to claim 1, wherein the at least two planar display portions are non-parallel to one another. 7. The apparatus according to claim 1, wherein the at least two planar display portions are different sizes. 8. 
An apparatus comprising: a document processing apparatus; a curved user interface connected to at least two non-parallel surfaces of an exterior of the document processing apparatus, wherein the curved user interface includes a screen having at least two planar display portions; and a processor operatively connected to the screen, wherein a first display portion of the at least two planar display portions is positioned on a top of the document processing apparatus and a second display portion of the at least two planar display portions is positioned on a side of the document processing apparatus, wherein the side is approximately perpendicular to the top, and wherein the processor is adapted to automatically cause the first display portion to display alphanumeric characters of a machine/job state of the document processing apparatus, and automatically cause the second display portion to only display a graphic item lacking alphanumeric characters indicative of the machine/job state while the first display portion displays the alphanumeric characters of the machine/job state. 9. The apparatus according to claim 8, wherein the graphic item indicates the machine/job state of the document processing apparatus using facial expressions. 10. The apparatus according to claim 8, wherein the graphic item indicates the machine/job state of the document processing apparatus using engineering and math icons. 11. The apparatus according to claim 8, wherein the machine/job state of the document processing apparatus comprises error conditions, warning conditions, active processing conditions, and processing complete conditions. 12. The apparatus according to claim 8, wherein the document processing apparatus has a bottom adjacent a surface upon which the document processing apparatus rests, wherein the top is opposite the bottom, wherein sides are between the top and the bottom and approximately perpendicular to the top and the bottom, and wherein a first display portion of the at least two planar display portions is positioned on the top and a second display portion of the at least two planar display portions is positioned on one of the sides. 13. The apparatus according to claim 12, wherein the second display portion is approximately perpendicular to the surface upon which the document processing apparatus rests. 14. The apparatus according to claim 8, wherein the at least two planar display portions are different sizes. 15. 
A method comprising: determining, by a processor of a document processing apparatus, a machine/job state of the document processing apparatus, wherein the document processing apparatus comprises a user interface connected to an exterior of the document processing apparatus, wherein the user interface includes a screen having at least two planar display portions that lie in different planes; displaying alphanumeric characters of a machine/job state of the document processing apparatus on a first display portion of the at least two planar display portions; and displaying only a graphic item lacking alphanumeric characters indicative of the machine/job state of the document processing apparatus on a second display portion of the at least two planar display portions while the first display portion displays the alphanumeric characters of the machine/job state, wherein the first display portion of the at least two planar display portions is positioned on a top of the document processing apparatus and the second display portion of the at least two planar display portions is positioned on a side of the document processing apparatus, and wherein the side is approximately perpendicular to the top. 16. The method according to claim 15, wherein the displaying the graphic item indicates the machine/job state of the document processing apparatus using facial expressions. 17. The method according to claim 15, wherein the displaying the graphic item indicates the machine/job state of the document processing apparatus using engineering and math icons. 18. The method according to claim 15, wherein the displaying the graphic item indicates the machine/job state by graphically displaying error conditions, warning conditions, active processing conditions, and processing complete conditions. 19. The method according to claim 15, wherein the document processing apparatus has a bottom adjacent a surface upon which the document processing apparatus rests, wherein the top is opposite the bottom, and wherein the side is between the top and the bottom. 20. The method according to claim 15, wherein the at least two planar display portions are non-parallel to one another and are different sizes.
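The split between an alphanumeric portion and an icon-only portion reduces to a small dispatch, sketched below in Python. The state names, icon labels, and display methods are illustrative assumptions, not taken from the application.

# Sketch of driving the two display portions from one machine/job state.
# STATE_ICONS entries and the display methods are assumptions for illustration.
STATE_ICONS = {
    "error": "frowning face",       # facial-expression style graphic item
    "warning": "warning triangle",  # engineering/math style icon
    "printing": "gear",
    "complete": "smiling face",
}

def refresh_displays(state, detail, top_display, side_display):
    top_display.show_text(state.upper() + ": " + detail)            # alphanumeric portion on the top
    side_display.show_icon(STATE_ICONS.get(state, "question mark"))  # icon-only side portion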
2,600
10,843
10,843
16,250,363
2,643
Disclosed are various embodiments for restricting usage of a mobile device when a user is driving a vehicle. In one embodiment, it is determined that a mobile device is in use by a driver of an active vehicle. A functionality of the mobile device is then restricted based at least in part on determining that the mobile device is in use by the driver of the active vehicle. For example, a touch screen of the mobile device may be disabled, and the use of a hands-free interface may be made mandatory.
1. A method, comprising: receiving, by a mobile device from a server, information indicating one or more restrictions to a functionality of the mobile device; detecting, by at least one computing device comprising the mobile device, that the mobile device is in use by a driver within an active vehicle by detecting a signal emitted by the active vehicle; and applying, by the at least one computing device, the one or more restrictions to the functionality of the mobile device in response to detecting that the mobile device is in use by the driver within the active vehicle. 2. The method of claim 1, wherein the signal emitted by the active vehicle comprises a near-field signal emitted from at least one of: a seat or a dashboard. 3. The method of claim 1, wherein the signal emitted by the active vehicle indicates that the driver is a sole seated occupant of the active vehicle. 4. The method of claim 1, wherein detecting that the mobile device is in use by the driver within the active vehicle further comprises comparing a strength of the signal to one or more thresholds. 5. The method of claim 1, further comprising: determining a current time; and selectively enabling the one or more restrictions based at least in part on the current time. 6. The method of claim 1, wherein applying the one or more restrictions to the functionality of the mobile device further comprises automatically enabling a restricted version of an application in the mobile device in lieu of an unrestricted version of the application based at least in part on detecting that the mobile device is in use by the driver within the active vehicle. 7. The method of claim 1, further comprising receiving updated information from the server specifying an update to the one or more restrictions. 8. The method of claim 1, further comprising: restoring, by the at least one computing device, a previous functionality of the mobile device in response to determining at least one of: that a user of the mobile device has performed a predefined task; that the active vehicle is no longer in motion; or that the user of the mobile device is no longer the driver. 9. A mobile device, comprising: instructions embedded in a memory that, when executed by a processor, cause the processor to at least: receive information from a server specifying one or more restrictions on usage of the mobile device; determine a current time; selectively enable the one or more restrictions based at least in part on the current time; determine that the mobile device is in use by a driver within an active vehicle; and restrict a functionality of the mobile device according to the one or more restrictions based at least in part on determining that the mobile device is in use by the driver within the active vehicle. 10. The mobile device of claim 9, wherein a first restriction of the one or more restrictions is associated with a periodic time window for which the first restriction is selectively enabled. 11. The mobile device of claim 9, wherein restricting the functionality of the mobile device according to the one or more restrictions further comprises automatically enabling a restricted version of an application in the mobile device in lieu of an unrestricted version of the application based at least in part on determining that the mobile device is in use by the driver within the active vehicle. 12. 
The mobile device of claim 9, wherein determining that the mobile device is in use by the driver within the active vehicle further comprises detecting a signal emitted by the active vehicle. 13. The mobile device of claim 9, wherein the instructions, when executed by the processor, further cause the processor to at least receive updated information from the server specifying an update to the one or more restrictions. 14. A mobile device, comprising: instructions embedded in a memory that, when executed by a processor, cause the processor to at least: receive information from a server specifying one or more restrictions on usage of the mobile device; determine that the mobile device is in use by a driver within an active vehicle; and automatically enable a restricted version of an application in the mobile device in lieu of an unrestricted version of the application based at least in part on determining that the mobile device is in use by the driver within the active vehicle and the one or more restrictions. 15. The mobile device of claim 14, wherein the instructions, when executed by the processor, further cause the processor to at least disable the unrestricted version of the application in response to enabling the restricted version of the application. 16. The mobile device of claim 14, wherein determining that the mobile device is in use by the driver within the active vehicle further comprises detecting a signal emitted by the active vehicle. 17. The mobile device of claim 14, wherein the one or more restrictions are selectively enabled based at least in part on a current time. 18. The mobile device of claim 14, wherein the instructions, when executed by the processor, further cause the processor to at least receive updated information from the server specifying an update to the one or more restrictions. 19. The mobile device of claim 14, wherein determining that the mobile device is in use by the driver of the active vehicle further comprises determining that the mobile device is not moving along a scheduled route of a mass transportation vehicle. 20. The mobile device of claim 14, wherein determining that the mobile device is in use by the driver of the active vehicle further comprises determining that the mobile device does not detect a predetermined signal associated with a mass transportation vehicle.
Disclosed are various embodiments for restricting usage of a mobile device when a user is driving a vehicle. In one embodiment, it is determined that a mobile device is in use by a driver of an active vehicle. A functionality of the mobile device is then restricted based at least in part on determining that the mobile device is in use by the driver of the active vehicle. For example, a touch screen of the mobile device may be disabled, and the use of a hands-free interface may be made mandatory.1. A method, comprising: receiving, by a mobile device from a server, information indicating one or more restrictions to a functionality of the mobile device; detecting, by at least one computing device comprising the mobile device, that the mobile device is in use by a driver within an active vehicle by detecting a signal emitted by the active vehicle; and applying, by the at least one computing device, the one or more restrictions to the functionality of the mobile device in response to detecting that the mobile device is in use by the driver within the active vehicle. 2. The method of claim 1, wherein the signal emitted by the active vehicle comprises a near-field signal emitted from at least one of: a seat or a dashboard. 3. The method of claim 1, wherein the signal emitted by the active vehicle indicates that the driver is a sole seated occupant of the active vehicle. 4. The method of claim 1, wherein detecting that the mobile device is in use by the driver within the active vehicle further comprises comparing a strength of the signal to one or more thresholds. 5. The method of claim 1, further comprising: determining a current time; and selectively enabling the one or more restrictions based at least in part on the current time. 6. The method of claim 1, wherein applying the one or more restrictions to the functionality of the mobile device further comprises automatically enabling a restricted version of an application in the mobile device in lieu of an unrestricted version of the application based at least in part on detecting that the mobile device is in use by the driver within the active vehicle. 7. The method of claim 1, further comprising receiving updated information from the server specifying an update to the one or more restrictions. 8. The method of claim 1, further comprising: restoring, by the at least one computing device, a previous functionality of the mobile device in response to determining at least one of: that a user of the mobile device has performed a predefined task; that the active vehicle is no longer in motion; or that the user of the mobile device is no longer the driver. 9. A mobile device, comprising: instructions embedded in a memory that, when executed by a processor, cause the processor to at least: receive information from a server specifying one or more restrictions on usage of the mobile device; determine a current time; selectively enable the one or more restrictions based at least in part on the current time; determine that the mobile device is in use by a driver within an active vehicle; and restrict a functionality of the mobile device according to the one or more restrictions based at least in part on determining that the mobile device is in use by the driver within the active vehicle. 10. The mobile device of claim 9, wherein a first restriction of the one or more restrictions is associated with a periodic time window for which the first restriction is selectively enabled. 11. 
The mobile device of claim 9, wherein restricting the functionality of the mobile device according to the one or more restrictions further comprises automatically enabling a restricted version of an application in the mobile device in lieu of an unrestricted version of the application based at least in part on determining that the mobile device is in use by the driver within the active vehicle. 12. The mobile device of claim 9, wherein determining that the mobile device is in use by the driver within the active vehicle further comprises detecting a signal emitted by the active vehicle. 13. The mobile device of claim 9, wherein the instructions, when executed by the processor, further cause the processor to at least receive updated information from the server specifying an update to the one or more restrictions. 14. A mobile device, comprising: instructions embedded in a memory that, when executed by a processor, cause the processor to at least: receive information from a server specifying one or more restrictions on usage of the mobile device; determine that the mobile device is in use by a driver within an active vehicle; and automatically enable a restricted version of an application in the mobile device in lieu of an unrestricted version of the application based at least in part on determining that the mobile device is in use by the driver within the active vehicle and the one or more restrictions. 15. The mobile device of claim 14, wherein the instructions, when executed by the processor, further cause the processor to at least disable the unrestricted version of the application in response to enabling the restricted version of the application. 16. The mobile device of claim 14, wherein determining that the mobile device is in use by the driver within the active vehicle further comprises detecting a signal emitted by the active vehicle. 17. The mobile device of claim 14, wherein the one or more restrictions are selectively enabled based at least in part on a current time. 18. The mobile device of claim 14, wherein the instructions, when executed by the processor, further cause the processor to at least receive updated information from the server specifying an update to the one or more restrictions. 19. The mobile device of claim 14, wherein determining that the mobile device is in use by the driver of the active vehicle further comprises determining that the mobile device is not moving along a scheduled route of a mass transportation vehicle. 20. The mobile device of claim 14, wherein determining that the mobile device is in use by the driver of the active vehicle further comprises determining that the mobile device does not detect a predetermined signal associated with a mass transportation vehicle.
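The decision logic the claims describe can be sketched in a few lines of Python. The threshold, the time window, and the device/policy APIs are all assumptions; the claims only require that a vehicle signal, a current time, and server-supplied restrictions feed the decision.

# Sketch of the driver-restriction decision; all constants and APIs are assumed.
SIGNAL_THRESHOLD_DBM = -50       # near-field strength implying the driver's position
RESTRICTED_HOURS = range(7, 19)  # example periodic time window for a restriction

def apply_policy(device, server_policy, now_hour, vehicle_signal_dbm):
    in_driver_seat = (vehicle_signal_dbm is not None
                      and vehicle_signal_dbm > SIGNAL_THRESHOLD_DBM)
    window_active = now_hour in RESTRICTED_HOURS
    if in_driver_seat and window_active:
        for app in server_policy["restricted_apps"]:
            device.enable_variant(app, "restricted")  # restricted version in lieu of the full one
        device.disable_touch_input()                  # e.g. force the hands-free interface
    else:
        device.restore_previous_functionality()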
2,600
10,844
10,844
16,294,217
2,621
A system may include a resistive-inductive-capacitive sensor, a measurement circuit communicatively coupled to the resistive-inductive-capacitive sensor and configured to measure phase information associated with the resistive-inductive-capacitive sensor and based on the phase information, determine a displacement of a mechanical member relative to the resistive-inductive-capacitive sensor. The system may also include a Q factor enhancer communicatively coupled to the resistive-inductive-capacitive sensor and configured to control a Q factor of the resistive-inductive-capacitive sensor.
1. A system comprising: a resistive-inductive-capacitive sensor; a measurement circuit communicatively coupled to the resistive-inductive-capacitive sensor and configured to: measure phase information associated with the resistive-inductive-capacitive sensor; and based on the phase information, determine a displacement of a mechanical member relative to the resistive-inductive-capacitive sensor; and a Q factor enhancer communicatively coupled to the resistive-inductive-capacitive sensor and configured to control a Q factor of the resistive-inductive-capacitive sensor. 2. The system of claim 1, wherein the Q factor enhancer implements a negative impedance that at least partially cancels an impedance of the resistive-inductive-capacitive sensor. 3. The system of claim 1, further comprising a Q factor detector coupled to the Q factor enhancer and configured to monitor the Q factor based on the phase information and based on amplitude information associated with the resistive-inductive-capacitive sensor. 4. The system of claim 3, wherein the Q factor enhancer is further configured to control the Q factor to maintain the Q factor as measured by the Q factor detector within one or more predetermined thresholds. 5. The system of claim 4, wherein the Q factor enhancer and the Q factor detector form at least a portion of a control loop that comprises at least one of a feedforward path and a feedback path. 6. The system of claim 5, wherein a bandwidth of the control loop is set to avoid interference with an algorithm of the measurement circuit for determining the displacement of the mechanical member. 7. The system of claim 5, wherein the Q factor detector determines whether the Q factor is within the one or more predetermined thresholds based on at least one or more of a slope of a phase of the resistive-inductive-capacitive sensor as indicated by the phase information, an absolute phase of the resistive-inductive-capacitive sensor as indicated by the phase information, and an amplitude as indicated by the amplitude information. 8. The system of claim 7, wherein the measurement circuit is configured to account for modification of the Q factor by the Q factor enhancer in determining the phase information. 9. The system of claim 5, wherein the control loop controls the Q factor in such a manner to prevent oscillation of the resistive-inductive-capacitive sensor. 10. The system of claim 1, wherein the measurement circuit comprises a coherent incident/quadrature detector and the measurement circuit is configured to measure the phase information using the coherent incident/quadrature detector. 11. A method comprising: measuring phase information associated with a resistive-inductive-capacitive sensor; based on the phase information, determining a displacement of a mechanical member relative to the resistive-inductive-capacitive sensor; and controlling a Q factor of the resistive-inductive-capacitive sensor with a Q factor enhancer communicatively coupled to the resistive-inductive-capacitive sensor. 12. The method of claim 11, wherein the Q factor enhancer implements a negative impedance that at least partially cancels an impedance of the resistive-inductive-capacitive sensor. 13. The method of claim 11, further comprising monitoring the Q factor based on the phase information and based on amplitude information associated with the resistive-inductive-capacitive sensor with a Q factor detector coupled to the Q factor enhancer. 14. 
The method of claim 13, further comprising controlling, with the Q factor enhancer, the Q factor to maintain the Q factor as measured by the Q factor detector within one or more predetermined thresholds. 15. The method of claim 14, wherein the Q factor enhancer and the Q factor detector form at least a portion of a control loop that comprises at least one of a feedforward path and a feedback path. 16. The method of claim 15, wherein a bandwidth of the control loop is set to avoid interference with an algorithm of the measurement circuit for determining the displacement of the mechanical member. 17. The method of claim 15, further comprising determining, with the Q factor detector, whether the Q factor is within the one or more predetermined thresholds based on at least one or more of a slope of a phase of the resistive-inductive-capacitive sensor as indicated by the phase information, an absolute phase of the resistive-inductive-capacitive sensor as indicated by the phase information, and an amplitude as indicated by the amplitude information. 18. The method of claim 17, further comprising accounting for modification of the Q factor by the Q factor enhancer in determining the phase information. 19. The method of claim 15, wherein the control loop controls the Q factor in such a manner to prevent oscillation of the resistive-inductive-capacitive sensor. 20. The method of claim 11, wherein measuring the phase information comprises measuring the phase information with a coherent incident/quadrature detector.
A system may include a resistive-inductive-capacitive sensor, a measurement circuit communicatively coupled to the resistive-inductive-capacitive sensor and configured to measure phase information associated with the resistive-inductive-capacitive sensor and based on the phase information, determine a displacement of a mechanical member relative to the resistive-inductive-capacitive sensor. The system may also include a Q factor enhancer communicatively coupled to the resistive-inductive-capacitive sensor and configured to control a Q factor of the resistive-inductive-capacitive sensor.1. A system comprising: a resistive-inductive-capacitive sensor; a measurement circuit communicatively coupled to the resistive-inductive-capacitive sensor and configured to: measure phase information associated with the resistive-inductive-capacitive sensor; and based on the phase information, determine a displacement of a mechanical member relative to the resistive-inductive-capacitive sensor; and a Q factor enhancer communicatively coupled to the resistive-inductive-capacitive sensor and configured to control a Q factor of the resistive-inductive-capacitive sensor. 2. The system of claim 1, wherein the Q factor enhancer implements a negative impedance that at least partially cancels an impedance of the resistive-inductive-capacitive sensor. 3. The system of claim 1, further comprising a Q factor detector coupled to the Q factor enhancer and configured to monitor the Q factor based on the phase information and based on amplitude information associated with the resistive-inductive-capacitive sensor. 4. The system of claim 3, wherein the Q factor enhancer is further configured to control the Q factor to maintain the Q factor as measured by the Q factor detector within one or more predetermined thresholds. 5. The system of claim 4, wherein the Q factor enhancer and the Q factor detector form at least a portion of a control loop that comprises at least one of a feedforward path and a feedback path. 6. The system of claim 5, wherein a bandwidth of the control loop is set to avoid interference with an algorithm of the measurement circuit for determining the displacement of the mechanical member. 7. The system of claim 5, wherein the Q factor detector determines whether the Q factor is within the one or more predetermined thresholds based on at least one or more of a slope of a phase of the resistive-inductive-capacitive sensor as indicated by the phase information, an absolute phase of the resistive-inductive-capacitive sensor as indicated by the phase information, and an amplitude as indicated by the amplitude information. 8. The system of claim 7, wherein the measurement circuit is configured to account for modification of the Q factor by the Q factor enhancer in determining the phase information. 9. The system of claim 5, wherein the control loop controls the Q factor in such a manner to prevent oscillation of the resistive-inductive-capacitive sensor. 10. The system of claim 1, wherein the measurement circuit comprises a coherent incident/quadrature detector and the measurement circuit is configured to measure the phase information using the coherent incident/quadrature detector. 11. 
A method comprising: measuring phase information associated with a resistive-inductive-capacitive sensor; based on the phase information, determining a displacement of a mechanical member relative to the resistive-inductive-capacitive sensor; and controlling a Q factor of the resistive-inductive-capacitive sensor with a Q factor enhancer communicatively coupled to the resistive-inductive-capacitive sensor. 12. The method of claim 11, wherein the Q factor enhancer implements a negative impedance that at least partially cancels an impedance of the resistive-inductive-capacitive sensor. 13. The method of claim 11, further comprising monitoring the Q factor based on the phase information and based on amplitude information associated with the resistive-inductive-capacitive sensor with a Q factor detector coupled to the Q factor enhancer. 14. The method of claim 13, further comprising controlling, with the Q factor enhancer, the Q factor to maintain the Q factor as measured by the Q factor detector within one or more predetermined thresholds. 15. The method of claim 14, wherein the Q factor enhancer and the Q factor detector form at least a portion of a control loop that comprises at least one of a feedforward path and a feedback path. 16. The method of claim 15, wherein a bandwidth of the control loop is set to avoid interference with an algorithm of the measurement circuit for determining the displacement of the mechanical member. 17. The method of claim 15, further comprising determining, with the Q factor detector, whether the Q factor is within the one or more predetermined thresholds based on at least one or more of a slope of a phase of the resistive-inductive-capacitive sensor as indicated by the phase information, an absolute phase of the resistive-inductive-capacitive sensor as indicated by the phase information, and an amplitude as indicated by the amplitude information. 18. The method of claim 17, further comprising accounting for modification of the Q factor by the Q factor enhancer in determining the phase information. 19. The method of claim 15, wherein the control loop controls the Q factor in such a manner to prevent oscillation of the resistive-inductive-capacitive sensor. 20. The method of claim 11, wherein measuring the phase information comprises measuring the phase information with a coherent incident/quadrature detector.
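A rough picture of the control loop: estimate Q from the measured phase and amplitude, then trim the canceling negative impedance to hold Q inside a band, with a small gain so the loop does not disturb the displacement measurement. The Python sketch below uses a crude Q proxy and an assumed sensor API; none of it comes from the application.

# Sketch of the Q-factor control loop; bounds, gain, and sensor API are assumed.
Q_MIN, Q_MAX = 20.0, 40.0
LOOP_GAIN = 0.05  # small, so the loop stays slow relative to displacement sensing

def estimate_q(phase_slope, amplitude):
    # Crude proxy: a steeper phase slope through resonance and a larger
    # amplitude both indicate higher Q. Not a calibrated estimate.
    return abs(phase_slope) * amplitude

def q_control_step(sensor):
    q = estimate_q(sensor.phase_slope(), sensor.amplitude())
    if q < Q_MIN:
        sensor.negative_impedance += LOOP_GAIN * (Q_MIN - q)  # cancel more loss, raise Q
    elif q > Q_MAX:
        sensor.negative_impedance -= LOOP_GAIN * (q - Q_MAX)  # back off before oscillation
    return q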
2,600
10,845
10,845
15,900,020
2,623
Systems and methods include a detachable passive or interactive display that interfaces with a fixed-size portable communication or display device to provide variable-size display capability. The detachable display can comprise a compartment for seamlessly accommodating the portable communication or display device. The interface between the portable device and the detachable display can be implemented by wired or wireless communication. The detachable display can include a weatherproof body, and the compartment accommodating the portable device can include a weatherproof seal protecting the portable device. The detachable display device can include an additional user interface providing functionality of any complexity, from basic on/off and volume control switches to complex interactive menu navigation tools and other controls such as touch pads and biometric sensors. The detachable display device can include data input and output capabilities, such as text, audio and video, to other external devices.
1. A system comprising: a portable device including a first body and a first display screen providing a visual output of said portable device; and a host device including a second body and a second display screen and a compartment within said second body for removably accommodating said portable device, wherein said second display screen interfaces with said portable device via at least one of a wired and a wireless communication when said portable device is placed in said compartment, such that said second display screen mimics at least one functional feature of said first display screen. 2. The system as claimed in claim 1, wherein said at least one functional feature of said first display screen comprises an interactive input of said portable device. 3. The system as claimed in claim 1, wherein said portable device is a mobile communication device. 4. The system as claimed in claim 1, wherein said host device further comprises at least one of an input and an output port providing at least one of an external power and an external communication to at least one of said host device and said portable device. 5. The system of claim 1, further comprising a cover to facilitate retention of said portable device in said compartment of said host device. 6. The system of claim 5, wherein said second body and said cover form a weatherproof seal for said portable device in said compartment of said host device. 7. The system of claim 1, wherein said second display screen has a larger surface display area than said first display screen. 8. A display comprising: a first body; a microprocessor; a communication interface configured for at least one of wired and wireless communication; a first display screen; and a compartment within said first body for removably accommodating a portable device having a second display screen; wherein said microprocessor interfaces with said portable device via said communication interface when said portable device is placed in said compartment, such that said first display screen mimics at least one functional feature of said second display screen. 9. The display as claimed in claim 8, wherein said at least one functional feature of said second display screen comprises an interactive input of said portable device. 10. The display as claimed in claim 8, wherein said portable device is a mobile communication device. 11. The display as claimed in claim 8, further comprising at least one of an input and an output port providing at least one of an external power and an external communication to at least one of said microprocessor, said communication interface, said first display screen, and said portable device. 12. The display as claimed in claim 8, further comprising a cover to facilitate retention of said portable device in said compartment. 13. The display as claimed in claim 12, wherein said first body and said cover form a weatherproof seal for said portable device in said compartment. 14. The display as claimed in claim 8, wherein said first display screen has a larger surface display area than said second display screen. 15. 
A system comprising: a portable device including a first body, a first display screen, and a first communication interface; a host device including a second body, a second display screen, and a second communication interface; and a connector removably attachable to said portable device, said connector comprising: a third communication interface for establishing a first connection to said first communication interface, and a fourth communication interface for establishing a second connection to said second communication interface, wherein said first connection comprises a male-to-female connection between said first and third communication interfaces, and said second connection comprises a magnetic connection between said second and fourth communication interfaces, and wherein said host device interfaces with said portable device via said first and second connections when said portable device is attached to said connector to form said first connection and positioned with respect to said host device to form said second connection, such that said second display screen selectively or automatically mimics at least one functional feature of said first display screen. 16. The system as claimed in claim 15, wherein said at least one functional feature of said first display screen comprises an interactive input of said portable device. 17. The system as claimed in claim 15, wherein said portable device is a mobile communication device. 18. The system as claimed in claim 15, wherein said at least one functional feature of said first display screen comprises a visual output of said portable device. 19. The system as claimed in claim 1, wherein said at least one functional feature of said first display screen comprises a visual output of said portable device. 20. The device as claimed in claim 8, wherein said at least one functional feature of said second display screen comprises a visual output of said portable device.
Systems and methods include a detachable passive or interactive display that interfaces with a fixed-size portable communication or display device to provide variable-size display capability. The detachable display can comprise a compartment for seamlessly accommodating the portable communication or display device. The interface between the portable device and the detachable display can be implemented by wired or wireless communication. The detachable display can include a weatherproof body, and the compartment accommodating the portable device can include a weatherproof seal protecting the portable device. The detachable display device can include an additional user interface providing functionality of any complexity, from basic on/off and volume control switches to complex interactive menu navigation tools and other controls such as touch pads and biometric sensors. The detachable display device can include data input and output capabilities, such as text, audio and video, to other external devices.1. A system comprising: a portable device including a first body and a first display screen providing a visual output of said portable device; and a host device including a second body and a second display screen and a compartment within said second body for removably accommodating said portable device, wherein said second display screen interfaces with said portable device via at least one of a wired and a wireless communication when said portable device is placed in said compartment, such that said second display screen mimics at least one functional feature of said first display screen. 2. The system as claimed in claim 1, wherein said at least one functional feature of said first display screen comprises an interactive input of said portable device. 3. The system as claimed in claim 1, wherein said portable device is a mobile communication device. 4. The system as claimed in claim 1, wherein said host device further comprises at least one of an input and an output port providing at least one of an external power and an external communication to at least one of said host device and said portable device. 5. The system of claim 1, further comprising a cover to facilitate retention of said portable device in said compartment of said host device. 6. The system of claim 5, wherein said second body and said cover form a weatherproof seal for said portable device in said compartment of said host device. 7. The system of claim 1, wherein said second display screen has a larger surface display area than said first display screen. 8. A display comprising: a first body; a microprocessor; a communication interface configured for at least one of wired and wireless communication; a first display screen; and a compartment within said first body for removably accommodating a portable device having a second display screen; wherein said microprocessor interfaces with said portable device via said communication interface when said portable device is placed in said compartment, such that said first display screen mimics at least one functional feature of said second display screen. 9. The display as claimed in claim 8, wherein said at least one functional feature of said second display screen comprises an interactive input of said portable device. 10. The display as claimed in claim 8, wherein said portable device is a mobile communication device. 11. 
The display as claimed in claim 8, further comprising at least one of an input and an output port providing at least one of an external power and an external communication to at least one of said microprocessor, said communication interface, said first display screen, and said portable device. 12. The display as claimed in claim 8, further comprising a cover to facilitate retention of said portable device in said compartment. 13. The display as claimed in claim 12, wherein said first body and said cover form a weatherproof seal for said portable device in said compartment. 14. The display as claimed in claim 8, wherein said first display screen has a larger surface display area than said second display screen. 15. A system comprising: a portable device including a first body, a first display screen, and a first communication interface; a host device including a second body, a second display screen, and a second communication interface; and a connector removably attachable to said portable device, said connector comprising: a third communication interface for establishing a first connection to said first communication interface, and a fourth communication interface for establishing a second connection to said second communication interface, wherein said first connection comprises a male-to-female connection between said first and third communication interfaces, and said second connection comprises a magnetic connection between said second and fourth communication interfaces, and wherein said host device interfaces with said portable device via said first and second connections when said portable device is attached to said connector to form said first connection and positioned with respect to said host device to form said second connection, such that said second display screen selectively or automatically mimics at least one functional feature of said first display screen. 16. The system as claimed in claim 15, wherein said at least one functional feature of said first display screen comprises an interactive input of said portable device. 17. The system as claimed in claim 15, wherein said portable device is a mobile communication device. 18. The system as claimed in claim 15, wherein said at least one functional feature of said first display screen comprises a visual output of said portable device. 19. The system as claimed in claim 1, wherein said at least one functional feature of said first display screen comprises a visual output of said portable device. 20. The device as claimed in claim 8, wherein said at least one functional feature of said second display screen comprises a visual output of said portable device.
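The docking behavior reduces to a short handshake, sketched below in Python. Every object and method here (open_link, mirror, route_input) is a hypothetical stand-in for whatever wired or wireless interface an implementation would use.

# Sketch of the host/portable handshake when the device is seated in the compartment.
def on_device_docked(host, portable):
    link = host.open_link(portable, prefer="wired")   # wired or wireless interface
    host.screen.mirror(portable.screen, scale="fit")  # larger host screen mimics the small one
    link.route_input(host.touch_panel, to=portable)   # interactive input passes through
    if host.has_external_power():
        link.supply_power(portable)                   # optional charging via the host port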
2,600
10,846
10,846
16,237,285
2,613
Embodiments described herein provide a system for facilitating dynamic assistance to a user in an augmented reality (AR) environment of an AR device. During operation, the system detects a first element of an object using an object detector, wherein the object is associated with a task and the first element is associated with a step of the task. The system then determines an orientation and an alignment of the first element in the physical world of the user, and an overlay for the first element. The overlay can distinctly highlight one or more regions of the first element and indicate how the first element fits in the object. The system then applies the overlay to the one or more regions of the first element at the determined orientation in the AR environment.
1. A method for facilitating dynamic assistance to a user in an augmented reality (AR) environment of an AR device, comprising: detecting, by the AR device, a first element of an object using an object detector, wherein the object is associated with a task and the first element is associated with a step of the task; determining an orientation and an alignment of the first element in the physical world of the user; determining an overlay for the first element, wherein the overlay distinctly highlights one or more regions of the first element, and wherein the overlay indicates how the first element fits in the object based on the one or more regions; and highlighting the one or more regions in the AR environment by placing the overlay on the one or more regions of the first element at the determined orientation in the AR environment. 2. The method of claim 1, further comprising: determining that the first element is needed for the step; and projecting a hologram of the first element prior to locating the first element in an operating range of the AR device. 3. The method of claim 1, wherein the overlay includes a distinct mark for a respective region of the one or more regions of the first element, wherein the distinct mark indicates how the region fits with one or more other elements of the object. 4. The method of claim 1, further comprising: determining whether a tool or a fastener is needed for the step; determining that the tool or the fastener is in an operating range of the AR device; and highlighting the tool or the fastener in the AR environment. 5. The method of claim 1, further comprising: determining one or more elements of the object that are attachable to the first element; and projecting a hologram depicting the first element attached to the one or more elements in the AR environment. 6. The method of claim 1, further comprising enhancing a resolution of an image of the first element in the AR environment, wherein the enhancing includes increasing, within the AR environment, prominence of a symmetry-breaking feature of the first element. 7. The method of claim 1, further comprising: determining a first region of the one or more regions of the first element, wherein the first region is attachable to a second region of a second element of the object; and setting a same mark in the overlay for the first and second regions; and wherein placing the overlay comprises placing the same mark on the first and second regions. 8. The method of claim 1, further comprising: determining whether the AR device is capable of determining that the step is complete; in response to the AR device determining that the step is complete, detecting a third element of the object using the object detector, wherein the third element is associated with a step subsequent to the step; and in response to the AR device being unable to determine that the step is complete, waiting for an instruction from the user. 9. The method of claim 1, further comprising: obtaining a three-dimensional model of the first element; identifying the one or more regions in the three-dimensional model; and placing the overlay based on the identified one or more regions in the three-dimensional model. 10. The method of claim 1, further comprising determining the step of the task from a task model that includes one or more steps for completing the task, and wherein one or more elements of the object are associated with a respective step of the one or more steps. 11. 
An apparatus providing dynamic assistance to a user in an augmented reality (AR) environment, comprising: a processor; a storage device coupled to the processor and storing instructions that when executed by the processor cause the processor to perform a method, the method comprising: detecting, by the AR device, a first element of an object using an object detector, wherein the object is associated with a task and the first element is associated with a step of the task; determining an orientation and an alignment of the first element in the physical world of the user; determining an overlay for the first element, wherein the overlay distinctly highlights one or more regions of the first element, and wherein the overlay indicates how the first element fits in the object based on the one or more regions; and highlighting the one or more regions in the AR environment by placing the overlay on the one or more regions of the first element at the determined orientation in the AR environment. 12. The apparatus of claim 11, wherein the method further comprises: determining that the first element is needed for the step; and projecting a hologram of the first element prior to locating the first element in an operating range of the apparatus. 13. The apparatus of claim 11, wherein the overlay includes a distinct mark for a respective region of the one or more regions of the first element, wherein the distinct mark indicates how the region fits with one or more other elements of the object. 14. The apparatus of claim 11, wherein the method further comprises: determining whether a tool or a fastener is needed for the step; determining that the tool or the fastener is in an operating range of the apparatus; and highlighting the tool or the fastener in the AR environment. 15. The apparatus of claim 11, wherein the method further comprises: determining one or more elements of the object that are attachable to the first element; and projecting a hologram depicting the first element attached to the one or more elements in the AR environment. 16. The apparatus of claim 11, wherein the method further comprises enhancing a resolution of an image of the first element in the AR environment, wherein the enhancing includes increasing, within the AR environment, prominence of a symmetry-breaking feature of the first element. 17. The apparatus of claim 11, wherein the method further comprises: determining a first region of the one or more regions of the first element, wherein the first region is attachable to a second region of a second element of the object; and setting a same mark in the overlay for the first and second regions; and wherein placing the overlay comprises placing the same mark on the first and second regions. 18. The apparatus of claim 11, wherein the method further comprises: determining whether the apparatus is capable of determining that the step is complete; in response to the apparatus determining that the step is complete, detecting a third element of the object using the object detector, wherein the third element is associated with a step subsequent to the step; and in response to the apparatus being unable to determine that the step is complete, waiting for an instruction from the user. 19. The apparatus of claim 11, wherein the method further comprises: obtaining a three-dimensional model of the first element; identifying the one or more regions in the three-dimensional model; and placing the overlay based on the identified one or more regions in the three-dimensional model. 20. 
The apparatus of claim 11, wherein the method further comprises determining the step of the task from a task model that includes one or more steps for completing the task, and wherein one or more elements of the object are associated with a respective step of the one or more steps.
Embodiments described herein provide a system for facilitating dynamic assistance to a user in an augmented reality (AR) environment of an AR device. During operation, the system detects a first element of an object using an object detector, wherein the object is associated with a task and the first element is associated with a step of the task. The system then determines an orientation and an alignment of the first element in the physical world of the user, and an overlay for the first element. The overlay can distinctly highlight one or more regions of the first element and indicate how the first element fits in the object. The system then applies the overlay to the one or more regions of the first element at the determined orientation in the AR environment.1. A method for facilitating dynamic assistance to a user in an augmented reality (AR) environment of an AR device, comprising: detecting, by the AR device, a first element of an object using an object detector, wherein the object is associated with a task and the first element is associated with a step of the task; determining an orientation and an alignment of the first element in the physical world of the user; determining an overlay for the first element, wherein the overlay distinctly highlights one or more regions of the first element, and wherein the overlay indicates how the first element fits in the object based on the one or more regions; and highlighting the one or more regions in the AR environment by placing the overlay on the one or more regions of the first element at the determined orientation in the AR environment. 2. The method of claim 1, further comprising: determining that the first element is needed for the step; and projecting a hologram of the first element prior to locating the first element in an operating range of the AR device. 3. The method of claim 1, wherein the overlay includes a distinct mark for a respective region of the one or more regions of the first element, wherein the distinct mark indicates how the region fits with one or more other elements of the object. 4. The method of claim 1, further comprising: determining whether a tool or a fastener is needed for the step; determining that the tool or the fastener is in an operating range of the AR device; and highlighting the tool or the fastener in the AR environment. 5. The method of claim 1, further comprising: determining one or more elements of the object that are attachable to the first element; and projecting a hologram depicting the first element attached to the one or more elements in the AR environment. 6. The method of claim 1, further comprising enhancing a resolution of an image of the first element in the AR environment, wherein the enhancing includes increasing, within the AR environment, prominence of a symmetry-breaking feature of the first element. 7. The method of claim 1, further comprising: determining a first region of the one or more regions of the first element, wherein the first region is attachable to a second region of a second element of the object; and setting a same mark in the overlay for the first and second regions; and wherein placing the overlay comprises placing the same mark on the first and second regions. 8. 
The method of claim 1, further comprising: determining whether the AR device is capable of determining that the step is complete; in response to the AR device determining that the step is complete, detecting a third element of the object using the object detector, wherein the third element is associated with a step subsequent to the step; and in response to the AR device being unable to determine that the step is complete, waiting for an instruction from the user. 9. The method of claim 1, further comprising: obtaining a three-dimensional model of the first element; identifying the one or more regions in the three-dimensional model; and placing the overlay based on the identified one or more regions in the three-dimensional model. 10. The method of claim 1, further comprising determining the step of the task from a task model that includes one or more steps for completing the task, and wherein one or more elements of the object are associated with a respective step of the one or more steps. 11. An apparatus providing dynamic assistance to a user in an augmented reality (AR) environment, comprising: a processor; a storage device coupled to the processor and storing instructions that when executed by the processor cause the processor to perform a method, the method comprising: detecting, by the AR device, a first element of an object using an object detector, wherein the object is associated with a task and the first element is associated with a step of the task; determining an orientation and an alignment of the first element in the physical world of the user; determining an overlay for the first element, wherein the overlay distinctly highlights one or more regions of the first element, and wherein the overlay indicates how the first element fits in the object based on the one or more regions; and highlighting the one or more regions in the AR environment by placing the overlay on the one or more regions of the first element at the determined orientation in the AR environment. 12. The apparatus of claim 11, wherein the method further comprises: determining that the first element is needed for the step; and projecting a hologram of the first element prior to locating the first element in an operating range of the apparatus. 13. The apparatus of claim 11, wherein the overlay includes a distinct mark for a respective region of the one or more regions of the first element, wherein the distinct mark indicates how the region fits with one or more other elements of the object. 14. The apparatus of claim 11, wherein the method further comprises: determining whether a tool or a fastener is needed for the step; determining that the tool or the fastener is in an operating range of the apparatus; and highlighting the tool or the fastener in the AR environment. 15. The apparatus of claim 11, wherein the method further comprises: determining one or more elements of the object that are attachable to the first element; and projecting a hologram depicting the first element attached to the one or more elements in the AR environment. 16. The apparatus of claim 11, wherein the method further comprises enhancing a resolution of an image of the first element in the AR environment, wherein the enhancing includes increasing, within the AR environment, prominence of a symmetry-breaking feature of the first element. 17. 
The apparatus of claim 11, wherein the method further comprises: determining a first region of the one or more regions of the first element, wherein the first region is attachable to a second region of a second element of the object; and setting a same mark in the overlay for the first and second regions; and wherein placing the overlay comprises placing the same mark on the first and second regions. 18. The apparatus of claim 11, wherein the method further comprises: determining whether the apparatus is capable of determining that the step is complete; in response to the apparatus determining that the step is complete, detecting a third element of the object using the object detector, wherein the third element is associated with a step subsequent to the step; and in response to the apparatus being unable to determine that the step is complete, waiting for an instruction from the user. 19. The apparatus of claim 11, wherein the method further comprises: obtaining a three-dimensional model of the first element; identifying the one or more regions in the three-dimensional model; and placing the overlay based on the identified one or more regions in the three-dimensional model. 20. The apparatus of claim 11, wherein the method further comprises determining the step of the task from a task model that includes one or more steps for completing the task, and wherein one or more elements of the object are associated with a respective step of the one or more steps.
2,600
10,847
10,847
15,231,530
2,627
A portable device is configured to perform a touch input method. The method includes determining whether an input for selecting a specific divided area is detected in a divided touch area on a screen divided into a plurality of areas. The method also includes, if the input for selecting the specific divided area is detected, moving the selected specific divided area to the divided touch area. The method further includes, if a specific input occurs in the selected specific divided area moved to the divided touch area, performing a function of an item indicated by the specific input.
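The select-then-move flow in this abstract can be illustrated with a minimal sketch. All names here (`DividedScreen`, `on_select`, `on_input`, the area labels) are hypothetical stand-ins, not identifiers from the application; the sketch only shows the two operations the abstract describes: swapping a selected divided area into the reachable touch area, then running the function of the item a specific input lands on there.

```python
from typing import Callable, Dict

class DividedScreen:
    def __init__(self, areas: Dict[int, str], touch_area_id: int):
        self.areas = areas                  # area id -> content label
        self.touch_area_id = touch_area_id  # the reachable "divided touch area"

    def on_select(self, selected_area_id: int) -> None:
        """If a selection for another divided area is detected in the touch
        area, move (swap) that area's content into the touch area."""
        if selected_area_id == self.touch_area_id:
            return
        a, t = selected_area_id, self.touch_area_id
        self.areas[a], self.areas[t] = self.areas[t], self.areas[a]

    def on_input(self, item_action: Callable[[str], None]) -> None:
        """A specific input in the touch area performs the function of the
        item currently shown there."""
        item_action(self.areas[self.touch_area_id])

screen = DividedScreen({0: "browser", 1: "mail", 2: "music", 3: "home"}, touch_area_id=3)
screen.on_select(1)                                        # bring "mail" down to the touch area
screen.on_input(lambda item: print(f"launching {item}"))   # -> launching mail
```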
1. An electronic device, the electronic device comprising: a display; and a controller configured to: display a user interface screen having a text input portion via the display, in response to a first user input, move the user interface screen downward such that the text input portion is relocated from a first area to a second area of the display, the second area being lower than the first area, and in response to a second user input detected in the text input portion of the user interface screen, move the user interface screen upward such that the text input portion is relocated from the second area to the first area of the display with a keyboard provided in a third area of the display, the third area being lower than the first area. 2. The electronic device of claim 1, wherein the first user input comprises a predefined pattern input. 3. The electronic device of claim 2, wherein the predefined pattern comprises a tap input tapping a predefined portion of the electronic device at least two times. 4. The electronic device of claim 1, wherein the second user input comprises a touch input on the text input portion. 5. The electronic device of claim 1, wherein the controller is configured to control the display to display an upper portion of the user interface screen by moving the user interface screen. 6. The electronic device of claim 5, wherein the controller is configured to control the display to hide a portion other than the upper portion of the user interface screen from the display by moving the user interface screen. 7. The electronic device of claim 1, wherein the controller is configured to control the display to display the user interface screen by executing an application. 8. The electronic device of claim 1, wherein the first area is an initial area at which the text input portion is first displayed in the user interface screen. 9. The electronic device of claim 1, wherein a distance between the first area and the second area corresponds to a distance by which the user interface screen is moved. 10. The electronic device of claim 1, wherein the third area corresponds to a lower portion of the display, and wherein the keyboard overlaps with a lower portion of the user interface screen. 11. The electronic device of claim 1, wherein the keyboard is a QWERTY keyboard. 12. A portable device, the portable device comprising: a touch screen configured to display images and content; a controller configured to: detect a first touch input on an input interface of the portable device when a display screen is displayed on the touch screen, wherein the first touch input comprises a predetermined input; relocate at least a portion of the display screen to a lower part of the touch screen and display a display window on an upper part of the touch screen; detect a second touch input on the touch screen of the portable device; and relocate the at least a portion of the display screen to the upper part of the touch screen. 13. The portable device of claim 12, wherein the predetermined input comprises a number of touches. 14. The portable device of claim 12, wherein, in response to detecting the predetermined input, the controller is configured to move the content portion of the display screen and the text input portion of the display screen to one of the two parts. 15. The portable device of claim 14, wherein, after detecting the predetermined input, the text input portion of the display screen is displayed in the other of the two parts. 16. 
The portable device of claim 12, wherein, if the predetermined input is detected, the controller is configured to move the display screen downward from the upper part of the touch screen to the lower part of the touch screen. 17. The portable device of claim 12, wherein the touch screen comprises a keyboard configured to receive a text, numerical or symbol input. 18. The portable device of claim 12, wherein, after the display screen is moved, if another touch input is detected via the input interface, a size of a display area of the touch screen returns to an original size. 19. A portable device, the portable device comprising: a touch screen configured to display content; and a controller configured to: detect a first touch input via a touch input unit of the portable device when a display screen is displayed on the touch screen; relocate at least a portion of the display screen to a lower part of the touch screen; detect a second touch input on a text input portion displayed on the touch screen; and in response to the second touch input, display a keypad interface on the touch screen, the keypad interface configured to enable a user to select text for input into the text input portion. 20. The portable device of claim 19, wherein the first touch input comprises a number of touches. 21. The portable device of claim 19, wherein, in response to detecting the predetermined input, the controller is configured to move the content portion of the display screen and the text input portion of the display screen to one of the two parts; and wherein, after detecting the predetermined input, the text input portion of the display screen is displayed in the other of the two parts. 22. The portable device of claim 19, wherein, if the first touch input is detected, the controller is configured to move the display screen downward from the upper part of the touch screen to the lower part of the touch screen. 23. The portable device of claim 19, wherein the touch screen comprises a keyboard configured to receive a text, numerical or symbol input. 24. The portable device of claim 19, wherein, after the display screen is moved, if another touch input is detected via the input interface, a size of a display area of the touch screen returns to an original size.
A portable device is configured to perform a touch input method. The method includes determining whether an input for selecting a specific divided area is detected in a divided touch area on a screen divided into a plurality of areas. The method also includes, if the input for selecting the specific divided area is detected, moving the selected specific divided area to the divided touch area. The method further includes, if a specific input occurs in the selected specific divided area moved to the divided touch area, performing a function of an item indicated by the specific input.1. An electronic device, the electronic device comprising: a display; and a controller configured to: display a user interface screen having a text input portion via the display, in response to a first user input, move the user interface screen downward such that the text input portion is relocated from a first area to a second area of the display, the second area being lower than the first area, and in response to a second user input detected in the text input portion of the user interface screen, move the user interface screen upward such that the text input portion is relocated from the second area to the first area of the display with a keyboard provided in a third area of the display, the third area being lower than the first area. 2. The electronic device of claim 1, wherein the first user input comprises a predefined pattern input. 3. The electronic device of claim 2, wherein the predefined pattern comprises a tap input tapping a predefined portion of the electronic device at least two times. 4. The electronic device of claim 1, wherein the second user input comprises a touch input on the text input portion. 5. The electronic device of claim 1, wherein the controller is configured to control the display to display an upper portion of the user interface screen by moving the user interface screen. 6. The electronic device of claim 5, wherein the controller is configured to control the display to hide a portion other than the upper portion of the user interface screen from the display by moving the user interface screen. 7. The electronic device of claim 1, wherein the controller is configured to control the display to display the user interface screen by executing an application. 8. The electronic device of claim 1, wherein the first area is an initial area at which the text input portion is first displayed in the user interface screen. 9. The electronic device of claim 1, wherein a distance between the first area and the second area corresponds to a distance by which the user interface screen is moved. 10. The electronic device of claim 1, wherein the third area corresponds to a lower portion of the display, and wherein the keyboard overlaps with a lower portion of the user interface screen. 11. The electronic device of claim 1, wherein the keyboard is a QWERTY keyboard. 12. 
A portable device, the portable device comprising: a touch screen configured to display images and content; a controller configured to: detect a first touch input on an input interface of the portable device when a display screen is displayed on the touch screen, wherein the first touch input comprises a predetermined input; relocate at least a portion of the display screen to a lower part of the touch screen and display a display window on an upper part of the touch screen; detect a second touch input on the touch screen of the portable device; and relocate the at least a portion of the display screen to the upper part of the touch screen. 13. The portable device of claim 12, wherein the predetermined input comprises a number of touches. 14. The portable device of claim 12, wherein, in response to detecting the predetermined input, the controller is configured to move the content portion of the display screen and the text input portion of the display screen to one of the two parts. 15. The portable device of claim 14, wherein, after detecting the predetermined input, the text input portion of the display screen is displayed in the other of the two parts. 16. The portable device of claim 12, wherein, if the predetermined input is detected, the controller is configured to move the display screen downward from the upper part of the touch screen to the lower part of the touch screen. 17. The portable device of claim 12, wherein the touch screen comprises a keyboard configured to receive a text, numerical or symbol input. 18. The portable device of claim 12, wherein, after the display screen is moved, if another touch input is detected via the input interface, a size of a display area of the touch screen returns to an original size. 19. A portable device, the portable device comprising: a touch screen configured to display content; and a controller configured to: detect a first touch input via a touch input unit of the portable device when a display screen is displayed on the touch screen; relocate at least a portion of the display screen to a lower part of the touch screen; detect a second touch input on a text input portion displayed on the touch screen; and in response to the second touch input, display a keypad interface on the touch screen, the keypad interface configured to enable a user to select text for input into the text input portion. 20. The portable device of claim 19, wherein the first touch input comprises a number of touches. 21. The portable device of claim 19, wherein, in response to detecting the predetermined input, the controller is configured to move the content portion of the display screen and the text input portion of the display screen to one of the two parts; and wherein, after detecting the predetermined input, the text input portion of the display screen is displayed in the other of the two parts. 22. The portable device of claim 19, wherein, if the first touch input is detected, the controller is configured to move the display screen downward from the upper part of the touch screen to the lower part of the touch screen. 23. The portable device of claim 19, wherein the touch screen comprises a keyboard configured to receive a text, numerical or symbol input. 24. The portable device of claim 19, wherein, after the display screen is moved, if another touch input is detected via the input interface, a size of a display area of the touch screen returns to an original size.
2,600
10,848
10,848
16,008,557
2,649
A portable telephone apparatus includes an external separate assembly acting as a power storage device with charger circuit and as a satellite transmitter/receiver unit for text communications in the event that the cell signal for the telephone is not available. An app uses the processing capabilities of the cellular telephone to create communication data where many functions of the satellite transmitter/receiver of the external assembly are controlled by the cellular telephone app to reduce the functions in the external device to receiving and transmitting capability and a controller to queue data and control transmission and reception. The external assembly operates both the functions of a satellite transmitter/receiver unit and the power storage device simultaneously and independently of one another without requirement for switching.
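The app-side behavior this abstract describes, preferring one network, falling back to the other, and queuing messages when neither is reachable, can be sketched briefly. This is a minimal illustration under stated assumptions: `LinkSelector`, `is_cell_up`/`is_sat_up`, and the send callbacks are hypothetical stand-ins for the real signal checks and radio interfaces, not identifiers from the application.

```python
from collections import deque

class LinkSelector:
    """Choose cellular when available, fall back to the satellite link,
    and archive messages for later transmission when neither is up."""

    def __init__(self, send_cell, send_sat):
        self.send_cell = send_cell
        self.send_sat = send_sat
        self.queue = deque()        # messages archived until a link is available

    def send(self, message: str, is_cell_up: bool, is_sat_up: bool) -> str:
        if is_cell_up:              # prefer the cellular network when present
            self.send_cell(message)
            return "cellular"
        if is_sat_up:               # otherwise fall back to the LEO satellite link
            self.send_sat(message)
            return "satellite"
        self.queue.append(message)  # no link: queue and transmit later
        return "queued"

    def flush(self, is_sat_up: bool) -> None:
        # Drain the archive once a satellite becomes visible.
        while self.queue and is_sat_up:
            self.send_sat(self.queue.popleft())

sel = LinkSelector(send_cell=print, send_sat=print)
print(sel.send("position 49.9N 97.1W", is_cell_up=False, is_sat_up=False))  # -> queued
sel.flush(is_sat_up=True)  # transmits the archived message when a satellite is reachable
```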
1. A portable telephone apparatus comprising: a cellular telephone component comprising a processor providing processing capabilities and geographic positioning capabilities, a camera, a keypad, wireless accessory connections, a speaker, a microphone and a cellular telephone system for communication with a cellular telephone network; the cellular telephone component including an operating system capable of running a cellular telephone app; and an external assembly separate from the cellular telephone component comprising: a power storage device with charger circuit, to provide a function of a backup battery to the cellular telephone component through a connecting cable that can charge the cellular telephone component and power internal circuits of the cellular telephone component; a satellite transmitter/receiver unit with an antenna for communication with a satellite communication network using Low Earth Orbit (LEO) satellites; and a wired and/or wireless link to the cellular telephone component; where the app uses the processing capabilities of the cellular telephone component to create communication data and/or audio signals for transmission and presenting of received data to a cellular telephone user as audio streams or data messages; where many functions of the satellite transmitter/receiver of the external assembly are controlled by the cellular telephone app, so that only a minimum number of functions are embodied in the external assembly including receiving capability, transmitting capability, and a controller to queue data and control transmission and reception, where the app automatically chooses whether to use either the cellular telephone system of the cellular telephone component or the external satellite communication network; and where the external assembly conducts, using power from the power storage device, both the function of the satellite transmitter/receiver unit and the function of the backup battery to the cellular telephone component simultaneously and independently of one another. 2. The apparatus according to claim 1 wherein the app uses the processing capabilities of the cellular telephone component to inform the cellular telephone user if the cellular telephone network is available and if the satellite communication network is available. 3. The apparatus according to claim 1 wherein the controller of the external assembly is arranged to archive data and audio to be transmitted in the future over a satellite communication network if the satellite communication network cannot be contacted at that moment. 4. The apparatus according to claim 1 wherein the app uses the processing capabilities of the cellular telephone component to advise the cellular telephone user if the satellite communication network is available for transmission. 5. The apparatus according to claim 1 where the app uses the processing capabilities of the cellular telephone component to estimate how much time it will be until the satellite communication network is available for transmission. 6. The apparatus according to claim 1 wherein the external assembly uses a wireless link as a 2-way communication link to the cellular telephone component. 7. The apparatus according to claim 1 wherein the app uses the processing capabilities of the cellular telephone component to connect to the external assembly whenever the external assembly is powered on and either connected directly by cable or wirelessly without any intervention by the cellular telephone user. 8. 
The apparatus according to claim 1 wherein the app uses the processing capabilities of the cellular telephone component to appear as a “text messaging” app that sends and receives text messages through the cellular telephone network or through the external assembly automatically without input from the cellular telephone user. 9. The apparatus according to claim 1 where the app uses the processing capabilities of the cellular telephone component to allow the cellular telephone user to set preferences to use the cellular telephone network or the satellite communication network only, or to default to one of the satellite communication network and cellular telephone network as a first priority and use the other of the satellite communication network and cellular telephone network when needed. 10. The apparatus according to claim 1 wherein a switch is provided in the external assembly which, when actuated, sends a distress signal over the satellite communication network.
A portable telephone apparatus includes an external separate assembly acting as a power storage device with charger circuit and as a satellite transmitter/receiver unit for text communications in the event that the cell signal for the telephone is not available. An app uses the processing capabilities of the cellular telephone to create communication data where many functions of the satellite transmitter/receiver of the external assembly are controlled by the cellular telephone app to reduce the functions in the external device to receiving and transmitting capability and a controller to queue data and control transmission and reception. The external assembly operates both the functions of a satellite transmitter/receiver unit and the power storage device simultaneously and independently of one another without requirement for switching.1. A portable telephone apparatus comprising: a cellular telephone component comprising a processor providing processing capabilities and geographic positioning capabilities, a camera, a keypad, wireless accessory connections, a speaker, a microphone and a cellular telephone system for communication with a cellular telephone network; the cellular telephone component including an operating system capable of running a cellular telephone app; and an external assembly separate from the cellular telephone component comprising: a power storage device with charger circuit, to provide a function of a backup battery to the cellular telephone component through a connecting cable that can charge the cellular telephone component and power internal circuits of the cellular telephone component; a satellite transmitter/receiver unit with an antenna for communication with a satellite communication network using Low Earth Orbit (LEO) satellites; and a wired and/or wireless link to the cellular telephone component; where the app uses the processing capabilities of the cellular telephone component to create communication data and/or audio signals for transmission and presenting of received data to a cellular telephone user as audio streams or data messages; where many functions of the satellite transmitter/receiver of the external assembly are controlled by the cellular telephone app, so that only a minimum number of functions are embodied in the external assembly including receiving capability, transmitting capability, and a controller to queue data and control transmission and reception, where the app automatically chooses whether to use either the cellular telephone system of the cellular telephone component or the external satellite communication network; and where the external assembly conducts, using power from the power storage device, both the function of the satellite transmitter/receiver unit and the function of the backup battery to the cellular telephone component simultaneously and independently of one another. 2. The apparatus according to claim 1 wherein the app uses the processing capabilities of the cellular telephone component to inform the cellular telephone user if the cellular telephone network is available and if the satellite communication network is available. 3. The apparatus according to claim 1 wherein the controller of the external assembly is arranged to archive data and audio to be transmitted in the future over a satellite communication network if the satellite communication network cannot be contacted at that moment. 4. 
The apparatus according to claim 1 wherein the app uses the processing capabilities of the cellular telephone component to advise the cellular telephone user if the satellite communication network is available for transmission. 5. The apparatus according to claim 1 where the app uses the processing capabilities of the cellular telephone component to estimate how much time it will be until the satellite communication network is available for transmission. 6. The apparatus according to claim 1 wherein the external assembly uses a wireless link as a 2-way communication link to the cellular telephone component. 7. The apparatus according to claim 1 wherein the app uses the processing capabilities of the cellular telephone component to connect to the external assembly whenever the external assembly is powered on and either connected directly by cable or wirelessly without any intervention by the cellular telephone user. 8. The apparatus according to claim 1 wherein the app uses the processing capabilities of the cellular telephone component to appear as a “text messaging” app that sends and receives text messages through the cellular telephone network or through the external assembly automatically without input from the cellular telephone user. 9. The apparatus according to claim 1 where the app uses the processing capabilities of the cellular telephone component to allow the cellular telephone user to set preferences to use the cellular telephone network or the satellite communication network only, or to default to one of the satellite communication network and cellular telephone network as a first priority and use the other of the satellite communication network and cellular telephone network when needed. 10. The apparatus according to claim 1 wherein a switch is provided in the external assembly which, when actuated, sends a distress signal over the satellite communication network.
2,600
10,849
10,849
16,361,284
2,613
A display position of an image is moved in accordance with positional information of a display device having a curved display surface. Displacement of a display device is sensed by a camera portion and an acceleration sensor, and the display position is determined in accordance with the displacement, so that the image is displayed in the display position. In the case where the display device rotates or otherwise moves, a desired piece of information can be displayed automatically in a display region that can be easily seen.
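The claims in this record determine the display position from both the position and the area of the detected face. A minimal sketch of that idea follows; the face detector, the normalized bounding-box format, and the `min_area` gate are all assumptions for illustration, not details from the application.

```python
from typing import Optional, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height), normalized to 0..1

def display_position(face: Optional[Box], screen_w: int, screen_h: int,
                     min_area: float = 0.01) -> Optional[Tuple[int, int]]:
    """Place the image under the detected face center; ignore faces whose
    area is too small to indicate a nearby viewer (the area criterion)."""
    if face is None:
        return None
    x, y, w, h = face
    if w * h < min_area:          # area gate: distant faces don't move the image
        return None
    cx, cy = x + w / 2, y + h / 2  # face center in normalized coordinates
    return int(cx * screen_w), int(cy * screen_h)

print(display_position((0.55, 0.30, 0.20, 0.25), 960, 480))  # -> (624, 204)
```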
1. A display device comprising: a first camera portion configured to capture an image of a face of a user of the display device; and a display portion configured to display an image in a display position determined based on a position of the face of the user in the captured image and an area of the face of the user in the captured image. 2. The display device according to claim 1, wherein the display device is a wearable device. 3. The display device according to claim 1, further comprising: a second camera portion whose imaging range is outside an imaging range of the first camera portion. 4. The display device according to claim 1, wherein the display portion is configured to display an image with a first luminance in a front face and with a second luminance lower than the first luminance in an outer peripheral surface. 5. A display device comprising: a first camera portion configured to capture an image of a face of a user of the display device; and a display portion configured to display an image in a display position determined based on a position of the face of the user in the captured image and not based on an area of the face of the user in the captured image, wherein the display device is a wearable device. 6. The display device according to claim 5, further comprising: a second camera portion whose imaging range is outside an imaging range of the first camera portion. 7. The display device according to claim 5, wherein the display portion is configured to display an image with a first luminance in a front face and with a second luminance lower than the first luminance in an outer peripheral surface.
A display position of an image is moved in accordance with positional information of a display device having a curved display surface. Displacement of a display device is sensed by a camera portion and an acceleration sensor, and the display position is determined in accordance with the displacement, so that the image is displayed in the display position. In the case where the display device rotates or otherwise moves, a desired piece of information can be displayed automatically in a display region that can be easily seen.1. A display device comprising: a first camera portion configured to capture an image of a face of a user of the display device; and a display portion configured to display an image in a display position determined based on a position of the face of the user in the captured image and an area of the face of the user in the captured image. 2. The display device according to claim 1, wherein the display device is a wearable device. 3. The display device according to claim 1, further comprising: a second camera portion whose imaging range is outside an imaging range of the first camera portion. 4. The display device according to claim 1, wherein the display portion is configured to display an image with a first luminance in a front face and with a second luminance lower than the first luminance in an outer peripheral surface. 5. A display device comprising: a first camera portion configured to capture an image of a face of a user of the display device; and a display portion configured to display an image in a display position determined based on a position of the face of the user in the captured image and not based on an area of the face of the user in the captured image, wherein the display device is a wearable device. 6. The display device according to claim 5, further comprising: a second camera portion whose imaging range is outside an imaging range of the first camera portion. 7. The display device according to claim 5, wherein the display portion is configured to display an image with a first luminance in a front face and with a second luminance lower than the first luminance in an outer peripheral surface.
2,600
10,850
10,850
15,213,914
2,621
System and method for control using face detection or hand gesture detection algorithms in a captured image. Based on the existence of a detected human face or a hand gesture in an image captured by a digital camera (still or video), a control signal is generated and provided to a device. The control may provide power or disconnect power supply to the device (or part of the device circuits). Further, the location of the detected face in the image may be used to rotate a display screen horizontally, vertically or both, to achieve a better line of sight with a viewing person. If two or more faces are detected, the average location is calculated and used for line of sight correction. A linear feedback control loop is implemented wherein detected face deviation from the optimum is the error to be corrected by rotating the display to the required angular position. Hand gesture detection can be used as a replacement for a remote control, wherein the various hand gestures control the various functions of the controlled unit, such as a television set.
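The feedback loop this abstract describes (face deviation from image center as the error, display rotation as the correction, averaging when multiple faces are detected) can be sketched as a simple proportional controller. The gain, step clamp, and actuator interface below are illustrative assumptions, not parameters from the application.

```python
def control_step(face_xs, image_width, angle_deg, gain=0.1, max_step=5.0):
    """One loop iteration: average the detected face x-positions (multiple
    viewers), take the deviation from the image center as the error, and
    return the new pan angle that drives that error toward zero."""
    if not face_xs:
        return angle_deg                     # no faces detected: hold position
    avg_x = sum(face_xs) / len(face_xs)      # average location of all detected faces
    error = avg_x - image_width / 2          # pixels off the optical center
    step = max(-max_step, min(max_step, gain * error))  # clamp per-frame rotation
    return angle_deg + step                  # command for the pan actuator

angle = 0.0
for faces in [[400.0], [400.0, 520.0], []]:  # face x-coordinates per frame, 640-px image
    angle = control_step(faces, 640, angle)
    print(round(angle, 2))                   # -> 5.0, 10.0, 10.0
```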
1. A device for use with a Local Area Network (LAN) cable simultaneously carrying DC power and bi-directional serial digital data over the same wires, the device comprising: displaying means for visually displaying information; a light sensor having an output responsive to the sensed light, the light sensor being mechanically fixed so that it maintains a fixed position relative to the displaying means; a LAN connector for connecting to the LAN cable; a transceiver coupled to the LAN connector for transmitting the digital data to the LAN cable; software and a processor to execute the software coupled to control the displaying means and the transceiver; and a single enclosure housing the displaying means, the light sensor, the processor, the LAN connector, and the transceiver. 2. The device according to claim 1, wherein the light sensor output is responsive to light in a non-visible light spectrum. 3. The device according to claim 2, wherein the non-visible light spectrum comprises infrared or violet spectrum. 4. The device according to claim 1, wherein the light sensor is part of, or comprises, a digital video camera for capturing digital video data, the digital video camera having a center line of sight and being mechanically fixed so that the digital video camera is maintained in a fixed position relative to the displaying means. 5. The device according to claim 1, wherein the light sensor is based on, or uses, Charge-Coupled Devices (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) element. 6. The device according to claim 1, wherein the displaying means consists of, or comprises, a flat screen display or a video projector. 7. The device according to claim 6, wherein the displaying means uses DLP. 8. The device according to claim 1, wherein the displaying means is silicon-based. 9. The device according to claim 1, wherein the displaying means is LED (Light Emitting Diode), LCD (Liquid Crystal Display), or TFT (Thin-Film Transistor) based. 10. The device according to claim 1, further operative for displaying High Definition (HD), the device further comprising an HDMI (High-Definition Multimedia Interface) for receiving and displaying HD video by the displaying means. 11. The device according to claim 1, further operative to at least in part be powered from the DC power. 12. The device according to claim 1, further comprising a power/data splitter having first, second, and third ports for passing the bi-directional serial digital data between the first and second ports and for passing the DC power between the first and third ports, the first port coupled to the LAN connector, and the second port coupled to the transceiver. 13. The device according to claim 12, further for use with a power source that supplies at least part of the DC power, wherein the third port is coupled to the power source for supplying the DC power to the LAN cable. 14. The device according to claim 12, wherein the third port is coupled for powering the displaying means from the DC power. 15. The device according to claim 12, wherein the DC power and the serial digital data are carried using Frequency Division/Domain Multiplexing (FDM), where the serial digital data is carried in a frequency band above, and distinct from, the DC power. 16. The device according to claim 15, wherein the power/data splitter comprises a high pass filter between the first and second ports and a low pass filter between the first and third ports. 17. 
The device according to claim 15, wherein the power/data splitter comprises a transformer having windings and a capacitor connected between the transformer windings. 18. The device according to claim 1, wherein the transceiver is coupled to the displaying means for receiving information from the LAN cable and for displaying the information by the displaying means. 19. The device according to claim 1, wherein the light sensor is positioned to capture a scene substantially in front of the displaying means. 20. The device according to claim 1, wherein the light sensor comprises, or consists of, a digital video camera that comprises: an optical lens for focusing received light; a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens for capturing an image and producing electronic image information representing the image; and an analog-to-digital (A/D) converter coupled to the image sensor for generating a digital data representation of the image. 21. The device according to claim 1, wherein the transceiver comprises a LAN transceiver. 22. The device according to claim 21, wherein the LAN cable is based on, or uses, twisted-pair copper wires, the LAN transceiver is according to, compatible with, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and the LAN connector is RJ-45 type connector. 23. The device according to claim 22, wherein the LAN is an Ethernet-based LAN that is according to, compatible with, or based on, IEEE 802.3-2008 standard. 24. The device according to claim 1, wherein the DC power and the serial digital data are carried according to, compatible with, or based on, IEEE 802.3af-2003 or IEEE 802.3at-2009 standard. 25. The device according to claim 1, further for initiating and receiving telephone calls over a telephone network. 26. The device according to claim 25, wherein the telephone network is a cellular telephone network. 27. The device according to claim 26, further for initiating and receiving telephone calls over a cellular network, the device further comprising: a cellular antenna for over-the-air radio-frequency communication; and a cellular modem coupled to the cellular antenna for transmitting serial digital data to, or receiving serial digital data from, the cellular telephone network. 28. The device according to claim 27, further comprising, or consisting of, a cellular telephone device. 29. The device according to claim 27, wherein the communication over the cellular network is according to, compatible with, or is based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 30. The device according to claim 27, wherein the cellular modem is coupled to the displaying means for receiving information from the cellular network and displaying the received information by the displaying means. 31. 
A digital camera device for use with a cable simultaneously carrying DC power and bi-directional serial digital data over the same wires, the device comprising: a light emitter for emitting light; a digital video camera for capturing digital video data in a non-visible light spectrum, the digital video camera having a center line of sight and being mechanically fixed so that the digital video camera is maintained in a fixed position relative to the light emitter; a power/data splitter having first, second, and third ports for passing the bi-directional serial digital data between the first and second ports and for passing the DC power between the first and third ports; a connector coupled to the first port for connecting to the cable; a transceiver coupled between the second port and the digital video camera for transmitting the digital video data to the cable; software and a processor to execute the software coupled to control the light emitter, the transceiver, and the digital video camera; and a single enclosure housing the digital video camera, the processor, the connector, the power/data splitter, the light emitter, and the transceiver. 32. The device according to claim 31, wherein the non-visible light spectrum comprises infrared or violet spectrum. 33. The device according to claim 31, wherein the light emitter comprises a Light emitting Diode (LED). 34. The device according to claim 31, wherein the light emitter is part of a flat screen for visually displaying information. 35. The device according to claim 34, wherein the transceiver is coupled to the light emitter for receiving information from the cable and for displaying the information on the flat screen. 36. The device according to claim 34, further for receiving and displaying television channels, wherein the flat screen is configured for displaying the television channels. 37. The device according to claim 36, further comprising, or consisting of, a television set. 38. The device according to claim 34, wherein the flat screen is silicon-based. 39. The device according to claim 38, wherein the flat screen is LED (Light Emitting Diode), LCD (Liquid Crystal Display) or TFT (Thin-Film Transistor) based. 40. The device according to claim 34, further operative for displaying High Definition (HD), the device further comprising an HDMI (High-Definition Multimedia Interface) for receiving and displaying HD video on the flat screen. 41. The device according to claim 31, further operative to at least in part be powered from the DC power. 42. The device according to claim 41, wherein the third port is coupled to the light emitter for powering the light emitter from the DC power. 43. The device according to claim 31, further comprising an image processor coupled to receive the digital video data from the digital video camera for applying an element detection algorithm to detect the element in the digital video data, and wherein the device responds to the element detection. 44. The device according to claim 31, wherein the light emitter is positioned to illuminate a scene substantially captured by the digital video camera. 45. 
The device according to claim 31, wherein the digital video camera comprises: an optical lens for focusing received light; a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens for capturing an image and producing electronic image information representing the image; and an analog-to-digital (A/D) converter coupled to the image sensor for generating a digital data representation of the image. 46. The device according to claim 45, wherein the image sensor array is based on, or uses, Charge-Coupled Devices (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) elements. 47. The device according to claim 31, wherein the cable comprises a Local Area Network (LAN) cable, the connector comprises a LAN connector, and the transceiver comprises a LAN transceiver. 48. The device according to claim 47, wherein the LAN is an Ethernet-based LAN that is according to, compatible with, or based on, IEEE 802.3-2008 standard. 49. The device according to claim 48, wherein the LAN cable is based on, or uses, twisted-pair copper wires, the LAN transceiver is according to, compatible with, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and the LAN connector is RJ-45 type connector. 50. The device according to claim 47, wherein the DC power and the serial digital data are carried according to, compatible with, or based on, IEEE 802.3af-2003 or IEEE 802.3at-2009 standard. 51. The device according to claim 31, further for use with a power source that supplies at least part of the DC power, wherein the third port is coupled to the power source for supplying the DC power to the cable. 52. The device according to claim 31, wherein the DC power and the serial digital data are carried using Frequency Division/Domain Multiplexing (FDM), where the serial digital data is carried in a frequency band above, and distinct from, the DC power. 53. The device according to claim 52, wherein the power/data splitter comprises a high pass filter between the first and second ports and a low pass filter between the first and third ports. 54. The device according to claim 52, wherein the power/data splitter comprises a transformer having windings and a capacitor connected between the transformer windings. 55. The device according to claim 31, further comprising a video compressor coupled between the digital video camera and the transceiver for compressing the captured digital video data according to a compression scheme. 56. The device according to claim 55, wherein the compression scheme is lossy or lossless type. 57. The device according to claim 55, wherein the compression scheme is according to, compatible with, or based on, JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group) standard. 58. The device according to claim 31, further for initiating and receiving telephone calls over a telephone network. 59. The device according to claim 58, wherein the telephone network is a cellular telephone network. 60. The device according to claim 59, further comprising: a cellular antenna for over-the-air radio-frequency communication; and a cellular modem coupled to the cellular antenna for transmitting serial digital data to, or receiving serial digital data from, the cellular telephone network. 61. The device according to claim 60, further comprising, or consisting of, a cellular telephone device. 62. 
The device according to claim 60, wherein the communication over the cellular network is according to, compatible with, or based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 63. The device according to claim 60, further comprising a flat screen that comprises the light emitter, wherein the cellular modem is coupled to the flat screen for receiving information from the cellular network and displaying the received information on the flat screen.
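Claims 52 to 54 above describe how one cable carries both supplies: the serial data sits in a frequency band above DC, so a high-pass path recovers the data while a low-pass path recovers the power. The sketch below is a minimal numerical illustration of that split; the sample rate, signalling frequency, rail voltage, and first-order filter constant are assumptions for the example, not values from the application.

```python
# Minimal sketch of the frequency-division power/data split (claims 52-54).
import numpy as np

fs = 1_000_000                                   # simulation sample rate, Hz (assumed)
t = np.arange(0, 0.002, 1 / fs)                  # 2 ms of line signal

dc_power = 48.0 * np.ones_like(t)                # PoE-style 48 V DC rail (assumed)
data = 0.5 * np.sign(np.sin(2 * np.pi * 100_000 * t))  # crude 100 kHz serial stream
line = dc_power + data                           # both carried on the same wires

def one_pole_low_pass(x, alpha):
    """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    y = np.empty_like(x)
    acc = x[0]
    for i, v in enumerate(x):
        acc += alpha * (v - acc)
        y[i] = acc
    return y

recovered_power = one_pole_low_pass(line, alpha=0.001)  # third port: DC component
recovered_data = line - recovered_power                 # second port: AC residue

print(f"recovered DC level ~ {recovered_power[-1]:.1f} V")
print(f"data swing ~ {np.ptp(recovered_data[-100:]):.2f} V")
```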
System and method for control using face detection or hand gesture detection algorithms in a captured image. Based on the existence of a detected human face or a hand gesture in an image captured by a digital camera (still or video), a control signal is generated and provided to a device. The control may provide power to, or disconnect the power supply from, the device (or part of the device circuits). Further, the location of the detected face in the image may be used to rotate a display screen horizontally, vertically, or both, to achieve a better line of sight with a viewing person. If two or more faces are detected, the average location is calculated and used for line of sight correction. A linear feedback control loop is implemented wherein the detected face deviation from the optimum is the error to be corrected by rotating the display to the required angular position. Hand gesture detection can be used as a replacement for a remote control, wherein the various hand gestures control the various functions of the controlled unit, such as a television set.1. A device for use with a Local Area Network (LAN) cable simultaneously carrying DC power and bi-directional serial digital data over the same wires, the device comprising: displaying means for visually displaying information; a light sensor having an output responsive to the sensed light, the light sensor being mechanically fixed so that it maintains a fixed position relative to the displaying means; a LAN connector for connecting to the LAN cable; a transceiver coupled to the LAN connector for transmitting the digital data to the LAN cable; software and a processor to execute the software coupled to control the displaying means and the transceiver; and a single enclosure housing the displaying means, the light sensor, the processor, the LAN connector, and the transceiver. 2. The device according to claim 1, wherein the light sensor output is responsive to light in a non-visible light spectrum. 3. The device according to claim 2, wherein the non-visible light spectrum comprises an infrared or ultraviolet spectrum. 4. The device according to claim 1, wherein the light sensor is part of, or comprises, a digital video camera for capturing digital video data, the digital video camera having a center line of sight and being mechanically fixed so that the digital video camera is maintained in a fixed position relative to the displaying means. 5. The device according to claim 1, wherein the light sensor is based on, or uses, Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) elements. 6. The device according to claim 1, wherein the displaying means consists of, or comprises, a flat screen display or a video projector. 7. The device according to claim 6, wherein the displaying means uses DLP (Digital Light Processing). 8. The device according to claim 1, wherein the displaying means is silicon-based. 9. The device according to claim 1, wherein the displaying means is LED (Light Emitting Diode), LCD (Liquid Crystal Display), or TFT (Thin-Film Transistor) based. 10. The device according to claim 1, further operative for displaying High Definition (HD), the device further comprising an HDMI (High-Definition Multimedia Interface) for receiving and displaying HD video by the displaying means. 11. The device according to claim 1, further operative to be at least in part powered from the DC power. 12. 
The device according to claim 1, further comprising a power/data splitter having first, second, and third ports for passing the bi-directional serial digital data between the first and second ports and for passing the DC power between the first and third ports, the first port coupled to the LAN connector, and the second port coupled to the transceiver. 13. The device according to claim 12, further for use with a power source that supplies at least part of the DC power, wherein the third port is coupled to the power source for supplying the DC power to the LAN cable. 14. The device according to claim 12, wherein the third port is coupled for powering the displaying means from the DC power. 15. The device according to claim 12, wherein the DC power and the serial digital data are carried using Frequency Division/Domain Multiplexing (FDM), where the serial digital data is carried in a frequency band above, and distinct from, the DC power. 16. The device according to claim 15, wherein the power/data splitter comprises a high pass filter between the first and second ports and a low pass filter between the first and third ports. 17. The device according to claim 15, wherein the power/data splitter comprises a transformer having windings and a capacitor connected between the transformer windings. 18. The device according to claim 1, wherein the transceiver is coupled to the displaying means for receiving information from the LAN cable and for displaying the information by the displaying means. 19. The device according to claim 1, wherein the light sensor is positioned to capture a scene substantially in front of the displaying means. 20. The device according to claim 1, wherein the light sensor comprises, or consists of, a digital video camera that comprises: an optical lens for focusing received light; a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens for capturing an image and producing electronic image information representing the image; and an analog-to-digital (A/D) converter coupled to the image sensor for generating a digital data representation of the image. 21. The device according to claim 1, wherein the transceiver comprises a LAN transceiver. 22. The device according to claim 21, wherein the LAN cable is based on, or uses, twisted-pair copper wires, the LAN transceiver is according to, compatible with, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and the LAN connector is an RJ-45 type connector. 23. The device according to claim 22, wherein the LAN is an Ethernet-based LAN that is according to, compatible with, or based on, IEEE 802.3-2008 standard. 24. The device according to claim 1, wherein the DC power and the serial digital data are carried according to, compatible with, or based on, IEEE 802.3af-2003 or IEEE 802.3at-2009 standard. 25. The device according to claim 1, further for initiating and receiving telephone calls over a telephone network. 26. The device according to claim 25, wherein the telephone network is a cellular telephone network. 27. The device according to claim 26, further for initiating and receiving telephone calls over a cellular network, the device further comprising: a cellular antenna for over-the-air radio-frequency communication; and a cellular modem coupled to the cellular antenna for transmitting serial digital data to, or receiving serial digital data from, the cellular telephone network. 28. The device according to claim 27, further comprising, or consisting of, a cellular telephone device. 
29. The device according to claim 27, wherein the communication over the cellular network is according to, compatible with, or based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 30. The device according to claim 27, wherein the cellular modem is coupled to the displaying means for receiving information from the cellular network and displaying the received information by the displaying means. 31. A digital camera device for use with a cable simultaneously carrying DC power and bi-directional serial digital data over the same wires, the device comprising: a light emitter for emitting light; a digital video camera for capturing digital video data in a non-visible light spectrum, the digital video camera having a center line of sight and being mechanically fixed so that the digital video camera is maintained in a fixed position relative to the light emitter; a power/data splitter having first, second, and third ports for passing the bi-directional serial digital data between the first and second ports and for passing the DC power between the first and third ports; a connector coupled to the first port for connecting to the cable; a transceiver coupled between the second port and the digital video camera for transmitting the digital video data to the cable; software and a processor to execute the software coupled to control the light emitter, the transceiver, and the digital video camera; and a single enclosure housing the digital video camera, the processor, the connector, the power/data splitter, the light emitter, and the transceiver. 32. The device according to claim 31, wherein the non-visible light spectrum comprises an infrared or ultraviolet spectrum. 33. The device according to claim 31, wherein the light emitter comprises a Light Emitting Diode (LED). 34. The device according to claim 31, wherein the light emitter is part of a flat screen for visually displaying information. 35. The device according to claim 34, wherein the transceiver is coupled to the light emitter for receiving information from the cable and for displaying the information on the flat screen. 36. The device according to claim 34, further for receiving and displaying television channels, wherein the flat screen is configured for displaying the television channels. 37. The device according to claim 36, further comprising, or consisting of, a television set. 38. The device according to claim 34, wherein the flat screen is silicon-based. 39. The device according to claim 38, wherein the flat screen is LED (Light Emitting Diode), LCD (Liquid Crystal Display), or TFT (Thin-Film Transistor) based. 40. The device according to claim 34, further operative for displaying High Definition (HD), the device further comprising an HDMI (High-Definition Multimedia Interface) for receiving and displaying HD video on the flat screen. 41. The device according to claim 31, further operative to be at least in part powered from the DC power. 42. The device according to claim 41, wherein the third port is coupled to the light emitter for powering the light emitter from the DC power. 43. 
The device according to claim 31, further comprising an image processor coupled to receive the digital video data from the digital video camera for applying an element detection algorithm to detect an element in the digital video data, and wherein the device responds to the element detection. 44. The device according to claim 31, wherein the light emitter is positioned to illuminate a scene substantially captured by the digital video camera. 45. The device according to claim 31, wherein the digital video camera comprises: an optical lens for focusing received light; a photosensitive image sensor array disposed approximately at an image focal point plane of the optical lens for capturing an image and producing electronic image information representing the image; and an analog-to-digital (A/D) converter coupled to the image sensor for generating a digital data representation of the image. 46. The device according to claim 45, wherein the image sensor array is based on, or uses, Charge-Coupled Device (CCD) or Complementary Metal-Oxide-Semiconductor (CMOS) elements. 47. The device according to claim 31, wherein the cable comprises a Local Area Network (LAN) cable, the connector comprises a LAN connector, and the transceiver comprises a LAN transceiver. 48. The device according to claim 47, wherein the LAN is an Ethernet-based LAN that is according to, compatible with, or based on, IEEE 802.3-2008 standard. 49. The device according to claim 48, wherein the LAN cable is based on, or uses, twisted-pair copper wires, the LAN transceiver is according to, compatible with, or based on, 10Base-T, 100Base-TX, or 1000Base-T, and the LAN connector is an RJ-45 type connector. 50. The device according to claim 47, wherein the DC power and the serial digital data are carried according to, compatible with, or based on, IEEE 802.3af-2003 or IEEE 802.3at-2009 standard. 51. The device according to claim 31, further for use with a power source that supplies at least part of the DC power, wherein the third port is coupled to the power source for supplying the DC power to the cable. 52. The device according to claim 31, wherein the DC power and the serial digital data are carried using Frequency Division/Domain Multiplexing (FDM), where the serial digital data is carried in a frequency band above, and distinct from, the DC power. 53. The device according to claim 52, wherein the power/data splitter comprises a high pass filter between the first and second ports and a low pass filter between the first and third ports. 54. The device according to claim 52, wherein the power/data splitter comprises a transformer having windings and a capacitor connected between the transformer windings. 55. The device according to claim 31, further comprising a video compressor coupled between the digital video camera and the transceiver for compressing the captured digital video data according to a compression scheme. 56. The device according to claim 55, wherein the compression scheme is of a lossy or lossless type. 57. The device according to claim 55, wherein the compression scheme is according to, compatible with, or based on, JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group) standard. 58. The device according to claim 31, further for initiating and receiving telephone calls over a telephone network. 59. The device according to claim 58, wherein the telephone network is a cellular telephone network. 60. 
The device according to claim 59, further comprising: a cellular antenna for over-the-air radio-frequency communication; and a cellular modem coupled to the cellular antenna for transmitting serial digital data to, or receiving serial digital data from, the cellular telephone network. 61. The device according to claim 60, further comprising, or consisting of, a cellular telephone device. 62. The device according to claim 60, wherein the communication over the cellular network is according to, compatible with, or based on, GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), EDGE (Enhanced Data Rates for GSM Evolution), 3GSM, DECT (Digital Enhanced Cordless Telecommunications), Digital AMPS, or iDEN (Integrated Digital Enhanced Network). 63. The device according to claim 60, further comprising a flat screen that comprises the light emitter, wherein the cellular modem is coupled to the flat screen for receiving information from the cellular network and displaying the received information on the flat screen.
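The abstract opening this record describes a linear feedback control loop in which the deviation of the detected face, or the average location of several detected faces, from the optimum screen position is the error corrected by rotating the display. The sketch below is one plausible proportional-control reading of that loop; detect_faces(), the frame width, and the gain are hypothetical placeholders, not the application's implementation.

```python
# Proportional face-tracking sketch for the display-rotation loop in the abstract.

FRAME_WIDTH = 640          # pixels (assumed camera resolution)
K_P = 0.05                 # proportional gain, degrees per pixel (assumed)

def detect_faces(frame):
    """Hypothetical detector: returns a list of face-centre x coordinates."""
    raise NotImplementedError("plug in a real face detector here")

def correction_angle(face_centers_x):
    """Average the detected face positions (per the abstract, two or more
    faces are reduced to their average location) and return the angular
    correction for the display's pan actuator."""
    if not face_centers_x:
        return 0.0                        # no face detected: hold position
    avg_x = sum(face_centers_x) / len(face_centers_x)
    error_px = avg_x - FRAME_WIDTH / 2    # deviation from the optimum (centre)
    return K_P * error_px                 # proportional feedback term

# Example: faces at x = 200 and x = 400 average to 300, i.e. 20 px left of
# centre, so the display pans slightly toward the viewers.
print(correction_angle([200, 400]))       # -> -1.0 degrees with this gain
```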
2,600
10,851
10,851
15,639,826
2,658
A vehicle includes an interface device, an in-cabin vehicle control unit, a functional unit, and processing circuitry. The interface device receives a spoken command to identify an in-cabin vehicle zone of two or more in-cabin vehicle zones of the vehicle, and receives background audio data concurrently with a portion of the spoken command. The in-cabin vehicle control unit separates the background audio data from the spoken command, and selects which in-cabin vehicle zone of the two or more in-cabin vehicle zones is identified by the spoken command. The functional unit controls a function within the vehicle. The processing circuitry stores, to a command buffer, data processed from the received spoken command, and controls, based on the data processed from the received spoken command, the functional unit using audio input received from the selected in-cabin vehicle zone.
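The abstract turns on selecting one of several in-cabin zones from a spoken command. A minimal sketch of that selection step, assuming the command has already been transcribed and using illustrative zone names of my own, could look like this:

```python
# Keyword-based zone selection sketch; phrases and zone labels are invented.
ZONES = {
    "driver": "front-left",
    "front passenger": "front-right",
    "rear left": "rear-left",
    "rear right": "rear-right",
}

def select_zone(transcribed_command):
    """Return the in-cabin zone identified by the spoken command, if any."""
    text = transcribed_command.lower()
    for phrase, zone in ZONES.items():
        if phrase in text:
            return zone
    return None   # no zone named: leave the current selection unchanged

print(select_zone("set the temperature for the rear left seat"))  # rear-left
```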
1. A vehicle comprising: an interface device configured to: receive a spoken command to identify an in-cabin vehicle zone of two or more in-cabin vehicle zones of the vehicle; and receive background audio data concurrently with a portion of the spoken command; an in-cabin vehicle control unit, coupled to the interface device, the in-cabin vehicle control unit being configured to: separate the background audio data from the spoken command; and select which in-cabin vehicle zone of the two or more in-cabin vehicle zones is identified by the spoken command; a functional unit, coupled to the in-cabin vehicle control unit, the functional unit being configured to control a function within the vehicle; processing circuitry coupled to the interface device, to the in-cabin vehicle control unit, and to the functional unit, the processing circuitry being configured to: store, to a command buffer, data processed from the received spoken command; and control, based on the data processed from the received spoken command, the functional unit using audio input received from the selected in-cabin vehicle zone; and a memory device that implements the command buffer. 2. The vehicle of claim 1, the processing circuitry being further configured to cease controlling the functional unit using audio input received from a deactivated in-cabin vehicle zone of the two or more in-cabin vehicle zones prior to controlling the functional unit using the audio input received from the selected in-cabin vehicle zone of the two or more in-cabin vehicle zones, based on the receipt of the spoken command at the interface device. 3. The vehicle of claim 1, the processing circuitry being further configured to: control the functional unit using the audio input received from the selected in-cabin vehicle zone prior to the receipt of the spoken command at the interface device; and continue to control the functional unit using the audio input received from the selected in-cabin vehicle zone after receipt of the spoken command at the interface device. 4. The vehicle of claim 1, further comprising a touchscreen integrated as part of the interface device, the touchscreen being configured to replace the interface device's ability to receive a spoken command, and to receive a tactile input command to identify the in-cabin vehicle zone. 5. The vehicle of claim 1, further comprising a touchscreen integrated as part of the interface device, the touchscreen being configured to augment the interface device's ability to receive a spoken command, and to receive a tactile input command to identify the in-cabin vehicle zone. 6. The vehicle of claim 1, wherein the functional unit comprises a thermostat controller configured to control a temperature within the selected in-cabin vehicle zone. 7. The vehicle of claim 1, wherein the functional unit comprises a noise cancellation (NC) system configured to suppress sounds outside of the selected in-cabin vehicle zone. 8. The vehicle of claim 7, further comprising loudspeakers integrated as part of the NC system, the loudspeakers being configured to render amplified sounds within the selected in-cabin vehicle zone. 9. The vehicle of claim 1, further comprising a separate microphone array coupled to the functional unit, the separate microphone array being configured to perform audio beamforming within the selected in-cabin vehicle zone. 10. 
The vehicle of claim 9, wherein at least one microphone of the separate microphone array is located in the two or more in-cabin vehicle zones, and wherein the at least one microphone is configured to capture the spoken command from one of the two or more in-cabin vehicle zones. 11. The vehicle of claim 10, wherein the at least one microphone of the separate microphone array comprises an error microphone configured to capture noise data for an active noise cancellation (ANC) system of the vehicle. 12. The vehicle of claim 10, wherein the at least one microphone of the separate microphone array comprises an error microphone configured to capture noise data for an active noise cancellation (ANC) system of the vehicle, the separate microphone array further comprising at least one zone microphone configured to capture one or more command inputs from a respective in-cabin vehicle zone of the two or more in-cabin vehicle zones that is associated with the zone microphone, the zone microphone being configured to capture directional information associated with the command inputs. 13. The vehicle of claim 1, further comprising a steering wheel positioned within a respective in-cabin vehicle zone of the two or more in-cabin vehicle zones, wherein the interface device is positioned in the respective in-cabin vehicle zone in which the steering wheel is positioned. 14. The vehicle of claim 13, wherein the in-cabin vehicle control unit is configured to select which in-cabin vehicle zone is identified by the spoken command originating in in-cabin vehicle zones of the two or more in-cabin vehicle zones that are positioned behind the respective in-cabin vehicle zone in which the steering wheel is positioned. 15. The vehicle of claim 1, the processing circuitry being further configured to suppress audio input received from respective microphones of all in-cabin vehicle zones other than the selected in-cabin vehicle zone. 16. The vehicle of claim 1, the processing circuitry being further configured to amplify audio input received from a respective microphone of the selected in-cabin vehicle zone. 17. The vehicle of claim 1, the processing circuitry being further configured to: identify respective voice information associated with audio input received from respective microphones of the in-cabin vehicle zones other than the selected in-cabin vehicle zone; determine that any portion of the audio input that is received from a respective microphone of the selected in-cabin vehicle zone and is associated with the identified voice information received from the respective microphones of the in-cabin vehicle zones other than the selected in-cabin vehicle zone comprises noise with respect to the selected in-cabin vehicle zone; apply, based on the determination, noise cancellation to the portion of the audio input that comprises the noise with respect to the selected in-cabin vehicle zone to obtain noise-cancelled audio input associated with the selected in-cabin vehicle zone; and control the functional unit using the noise-cancelled audio input associated with the selected in-cabin vehicle zone. 18. The vehicle of claim 1, wherein the background audio data comprises audio data associated with a phone call that occurs outside of the selected in-cabin vehicle zone, and wherein the processing circuitry is further configured to generate a response to the spoken command, the vehicle further comprising one or more parametric speakers configured to render the response as an auditory response. 19. 
A method comprising: receiving, at an interface device of a vehicle, a spoken command to identify an in-cabin vehicle zone of two or more in-cabin vehicle zones of the vehicle; receiving, at the interface device, background audio data concurrently with a portion of the spoken command; separating, by an in-cabin vehicle control unit coupled to the interface device, the background audio data from the spoken command; selecting, by the in-cabin vehicle control unit, which in-cabin vehicle zone of the two or more in-cabin vehicle zones is identified by the spoken command; storing, to a memory device, by processing circuitry coupled to the interface device, to the in-cabin vehicle control unit, and to a functional unit of the vehicle, data processed from the received spoken command; controlling, by the processing circuitry, based on the data processed from the received spoken command, the functional unit using audio input received from the selected in-cabin vehicle zone; and controlling, by the functional unit of the vehicle, a function within the vehicle. 20. The method of claim 19, further comprising ceasing, by the processing circuitry, controlling the functional unit using audio input received from a deactivated in-cabin vehicle zone of the two or more in-cabin vehicle zones prior to controlling the functional unit using the audio input received from the selected in-cabin vehicle zone of the two or more in-cabin vehicle zones, based on the receipt of the spoken command at the interface device. 21. The method of claim 19, further comprising: controlling, by the processing circuitry, the functional unit using the audio input received from the selected in-cabin vehicle zone prior to the receipt of the spoken command at the interface device; and continuing, by the processing circuitry, to control the functional unit using the audio input received from the selected in-cabin vehicle zone after receipt of the spoken command at the interface device. 22. The method of claim 19, further comprising: replacing, by a touchscreen integrated as part of the interface device, the interface device's ability to receive a spoken command; and receiving, by the touchscreen, a tactile input command to identify the in-cabin vehicle zone. 23. The method of claim 19, further comprising: augmenting, by a touchscreen integrated as part of the interface device, the interface device's ability to receive a spoken command; and receiving, by the touchscreen, a tactile input command to identify the in-cabin vehicle zone. 24. The method of claim 19, further comprising performing beamforming, by a separate microphone array coupled to the functional unit, within the selected in-cabin vehicle zone. 25. The method of claim 19, further comprising suppressing, by the processing circuitry, audio input received from respective microphones of all in-cabin vehicle zones other than the selected in-cabin vehicle zone. 26. The method of claim 19, further comprising amplifying audio input received from a respective microphone of the selected in-cabin vehicle zone. 27. 
The method of claim 19, further comprising: identifying, by the processing circuitry, respective voice information associated with audio input received from respective microphones of the in-cabin vehicle zones other than the selected in-cabin vehicle zone; determining, by the processing circuitry, that any portion of the audio input that is received from a respective microphone of the selected in-cabin vehicle zone and is associated with the identified voice information received from the respective microphones of the in-cabin vehicle zones other than the selected in-cabin vehicle zone comprises noise with respect to the selected in-cabin vehicle zone; applying, by the processing circuitry, noise cancellation to the portion of the audio input that comprises the noise with respect to the selected in-cabin vehicle zone to obtain noise-cancelled audio input associated with the selected in-cabin vehicle zone based on the determination; and controlling, by the processing circuitry, the functional unit using the noise-cancelled audio input associated with the selected in-cabin vehicle zone. 28. The method of claim 19, wherein the background audio data comprises audio data associated with a phone call that occurs outside of the selected in-cabin vehicle zone, the method further comprising: generating, by the processing circuitry, a response to the spoken command; and rendering, via one or more parametric speakers, the response as an auditory response. 29. An apparatus comprising: means for receiving, via an interface device, a spoken command to identify an in-cabin vehicle zone of two or more in-cabin vehicle zones of a vehicle; means for receiving, via the interface device, background audio data concurrently with a portion of the spoken command; means for separating the background audio data from the spoken command; means for selecting which in-cabin vehicle zone of the two or more in-cabin vehicle zones is identified by the spoken command; means for storing, to a memory device, data processed from the received spoken command; and means for controlling, based on the data processed from the received spoken command, a functional unit using audio input received from the selected in-cabin vehicle zone. 30. A computer-readable storage medium encoded with instructions that, when executed, cause processing circuitry of a vehicle to: receive, via an interface device of the vehicle, a spoken command to identify an in-cabin vehicle zone of two or more in-cabin vehicle zones of the vehicle; receive, via the interface device, background audio data concurrently with a portion of the spoken command; separate the background audio data from the spoken command; select which in-cabin vehicle zone of the two or more in-cabin vehicle zones is identified by the spoken command; store data processed from the received spoken command; control, based on the data processed from the received spoken command, a functional unit of the vehicle using audio input received from the selected in-cabin vehicle zone; and cause the functional unit to control a function within the vehicle.
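Claims 17 and 27 treat speech that leaks from other zones into the selected zone's microphone as noise to be cancelled. The application does not name an algorithm; a classic technique for this situation, offered here only as an illustrative substitute, is a least-mean-squares (LMS) adaptive canceller that uses another zone's microphone as the noise reference. The signals below are synthetic.

```python
# LMS adaptive cancellation sketch for cross-zone voice leakage (claims 17/27).
import numpy as np

def lms_cancel(primary, reference, taps=32, mu=0.01):
    """Subtract the reference-correlated component (the other zone's voice)
    from the primary (selected-zone) signal; returns the cleaned signal."""
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]   # most recent reference samples
        y = w @ x                         # estimate of the leaked voice
        e = primary[n] - y                # error = cleaned sample
        w += 2 * mu * e * x               # LMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(0)
other_voice = rng.standard_normal(4000)                # talker in another zone
wanted = np.sin(2 * np.pi * 0.01 * np.arange(4000))    # selected-zone speech proxy
mic = wanted + 0.5 * np.convolve(other_voice, [0.6, 0.3], "same")  # leakage path

cleaned = lms_cancel(mic, other_voice)
print(np.std(mic - wanted), ">", np.std(cleaned[500:] - wanted[500:]))
```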
A vehicle includes an interface device, an in-cabin vehicle control unit, a functional unit, and processing circuitry. The interface device receives a spoken command to identify an in-cabin vehicle zone of two or more in-cabin vehicle zones of the vehicle, and receives background audio data concurrently with a portion of the spoken command. The in-cabin vehicle control unit separates the background audio data from the spoken command, and selects which in-cabin vehicle zone of the two or more in-cabin vehicle zones is identified by the spoken command. The functional unit controls a function within the vehicle. The processing circuitry stores, to a command buffer, data processed from the received spoken command, and controls, based on the data processed from the received spoken command, the functional unit using audio input received from the selected in-cabin vehicle zone.1. A vehicle comprising: an interface device configured to: receive a spoken command to identify an in-cabin vehicle zone of two or more in-cabin vehicle zones of the vehicle; and receive background audio data concurrently with a portion of the spoken command; an in-cabin vehicle control unit, coupled to the interface device, the in-cabin vehicle control unit being configured to: separate the background audio data from the spoken command; and select which in-cabin vehicle zone of the two or more in-cabin vehicle zones is identified by the spoken command; a functional unit, coupled to the in-cabin vehicle control unit, the functional unit being configured to control a function within the vehicle; processing circuitry coupled to the interface device, to the in-cabin vehicle control unit, and to the functional unit, the processing circuitry being configured to: store, to a command buffer, data processed from the received spoken command; and control, based on the data processed from the received spoken command, the functional unit using audio input received from the selected in-cabin vehicle zone; and a memory device that implements the command buffer. 2. The vehicle of claim 1, the processing circuitry being further configured to cease controlling the functional unit using audio input received from a deactivated in-cabin vehicle zone of the two or more in-cabin vehicle zones prior to controlling the functional unit using the audio input received from the selected in-cabin vehicle zone of the two or more in-cabin vehicle zones, based on the receipt of the spoken command at the interface device. 3. The vehicle of claim 1, the processing circuitry being further configured to: control the functional unit using the audio input received from the selected in-cabin vehicle zone prior to the receipt of the spoken command at the interface device; and continue to control the functional unit using the audio input received from the selected in-cabin vehicle zone after receipt of the spoken command at the interface device. 4. The vehicle of claim 1, further comprising a touchscreen integrated as part of the interface device, the touchscreen being configured to replace the interface device's ability to receive a spoken command, and to receive a tactile input command to identify the in-cabin vehicle zone. 5. The vehicle of claim 1, further comprising a touchscreen integrated as part of the interface device, the touchscreen being configured to augment the interface device's ability to receive a spoken command, and to receive a tactile input command to identify the in-cabin vehicle zone. 6. 
The vehicle of claim 1, wherein the functional unit comprises a thermostat controller configured to control a temperature within the selected in-cabin vehicle zone. 7. The vehicle of claim 1, wherein the functional unit comprises a noise cancellation (NC) system configured to suppress sounds outside of the selected in-cabin vehicle zone. 8. The vehicle of claim 7, further comprising loudspeakers integrated as part of the NC system, the loudspeakers being configured to render amplified sounds within the selected in-cabin vehicle zone. 9. The vehicle of claim 1, further comprising a separate microphone array coupled to the functional unit, the separate microphone array being configured to perform audio beamforming within the selected in-cabin vehicle zone. 10. The vehicle of claim 9, wherein at least one microphone of the separate microphone array is located in the two or more in-cabin vehicle zones, and wherein the at least one microphone is configured to capture the spoken command from one of the two or more in-cabin vehicle zones. 11. The vehicle of claim 10, wherein the at least one microphone of the separate microphone array comprises an error microphone configured to capture noise data for an active noise cancellation (ANC) system of the vehicle. 12. The vehicle of claim 10, wherein the at least one microphone of the separate microphone array comprises an error microphone configured to capture noise data for an active noise cancellation (ANC) system of the vehicle, the separate microphone array further comprising at least one zone microphone configured to capture one or more command inputs from a respective in-cabin vehicle zone of the two or more in-cabin vehicle zones that is associated with the zone microphone, the zone microphone being configured to capture directional information associated with the command inputs. 13. The vehicle of claim 1, further comprising a steering wheel positioned within a respective in-cabin vehicle zone of the two or more in-cabin vehicle zones, wherein the interface device is positioned in the respective in-cabin vehicle zone in which the steering wheel is positioned. 14. The vehicle of claim 13, wherein the in-cabin vehicle control unit is configured to select which in-cabin vehicle zone is identified by the spoken command originating in in-cabin vehicle zones of the two or more in-cabin vehicle zones that are positioned behind the respective in-cabin vehicle zone in which the steering wheel is positioned. 15. The vehicle of claim 1, the processing circuitry being further configured to suppress audio input received from respective microphones of all in-cabin vehicle zones other than the selected in-cabin vehicle zone. 16. The vehicle of claim 1, the processing circuitry being further configured to amplify audio input received from a respective microphone of the selected in-cabin vehicle zone. 17. 
The vehicle of claim 1, the processing circuitry being further configured to: identify respective voice information associated with audio input received from respective microphones of the in-cabin vehicle zones other than the selected in-cabin vehicle zone; determine that any portion of the audio input that is received from a respective microphone of the selected in-cabin vehicle zone and is associated with the identified voice information received from the respective microphones of the in-cabin vehicle zones other than the selected in-cabin vehicle zone comprises noise with respect to the selected in-cabin vehicle zone; apply, based on the determination, noise cancellation to the portion of the audio input that comprises the noise with respect to the selected in-cabin vehicle zone to obtain noise-cancelled audio input associated with the selected in-cabin vehicle zone; and control the functional unit using the noise-cancelled audio input associated with the selected in-cabin vehicle zone. 18. The vehicle of claim 1, wherein the background audio data comprises audio data associated with a phone call that occurs outside of the selected in-cabin vehicle zone, and wherein the processing circuitry is further configured to generate a response to the spoken command, the vehicle further comprising one or more parametric speakers configured to render the response as an auditory response. 19. A method comprising: receiving, at an interface device of a vehicle, a spoken command to identify an in-cabin vehicle zone of two or more in-cabin vehicle zones of the vehicle; receiving, at the interface device, background audio data concurrently with a portion of the spoken command; separating, by an in-cabin vehicle control unit coupled to the interface device, the background audio data from the spoken command; selecting, by the in-cabin vehicle control unit, which in-cabin vehicle zone of the two or more in-cabin vehicle zones is identified by the spoken command; storing, to a memory device, by processing circuitry coupled to the interface device, to the in-cabin vehicle control unit, and to a functional unit of the vehicle, data processed from the received spoken command; controlling, by the processing circuitry, based on the data processed from the received spoken command, the functional unit using audio input received from the selected in-cabin vehicle zone; and controlling, by the functional unit of the vehicle, a function within the vehicle. 20. The method of claim 19, further comprising ceasing, by the processing circuitry, controlling the functional unit using audio input received from a deactivated in-cabin vehicle zone of the two or more in-cabin vehicle zones prior to controlling the functional unit using the audio input received from the selected in-cabin vehicle zone of the two or more in-cabin vehicle zones, based on the receipt of the spoken command at the interface device. 21. The method of claim 19, further comprising: controlling, by the processing circuitry, the functional unit using the audio input received from the selected in-cabin vehicle zone prior to the receipt of the spoken command at the interface device; and continuing, by the processing circuitry, to control the functional unit using the audio input received from the selected in-cabin vehicle zone after receipt of the spoken command at the interface device. 22. 
The method of claim 19, further comprising: replacing, by a touchscreen integrated as part of the interface device, the interface device's ability to receive a spoken command; and receiving, by the touchscreen, a tactile input command to identify the in-cabin vehicle zone. 23. The method of claim 19, further comprising: augmenting, by a touchscreen integrated as part of the interface device, the interface device's ability to receive a spoken command; and receiving, by the touchscreen, a tactile input command to identify the in-cabin vehicle zone. 24. The method of claim 19, further comprising performing beamforming, by a separate microphone array coupled to the functional unit, within the selected in-cabin vehicle zone. 25. The method of claim 19, further comprising suppressing, by the processing circuitry, audio input received from respective microphones of all in-cabin vehicle zones other than the selected in-cabin vehicle zone. 26. The method of claim 19, further comprising amplifying audio input received from a respective microphone of the selected in-cabin vehicle zone. 27. The method of claim 19, further comprising: identifying, by the processing circuitry, respective voice information associated with audio input received from respective microphones of the in-cabin vehicle zones other than the selected in-cabin vehicle zone; determining, by the processing circuitry, that any portion of the audio input that is received from a respective microphone of the selected in-cabin vehicle zone and is associated with the identified voice information received from the respective microphones of the in-cabin vehicle zones other than the selected in-cabin vehicle zone comprises noise with respect to the selected in-cabin vehicle zone; applying, by the processing circuitry, noise cancellation to the portion of the audio input that comprises the noise with respect to the selected in-cabin vehicle zone to obtain noise-cancelled audio input associated with the selected in-cabin vehicle zone based on the determination; and controlling, by the processing circuitry, the functional unit using the noise-cancelled audio input associated with the selected in-cabin vehicle zone. 28. The method of claim 19, wherein the background audio data comprises audio data associated with a phone call that occurs outside of the selected in-cabin vehicle zone, the method further comprising: generating, by the processing circuitry, a response to the spoken command; and rendering, via one or more parametric speakers, the response as an auditory response. 29. An apparatus comprising: means for receiving, via an interface device, a spoken command to identify an in-cabin vehicle zone of two or more in-cabin vehicle zones of a vehicle; means for receiving, via the interface device, background audio data concurrently with a portion of the spoken command; means for separating the background audio data from the spoken command; means for selecting which in-cabin vehicle zone of the two or more in-cabin vehicle zones is identified by the spoken command; means for storing, to a memory device, data processed from the received spoken command; and means for controlling, based on the data processed from the received spoken command, a functional unit using audio input received from the selected in-cabin vehicle zone. 30. 
A computer-readable storage medium encoded with instructions that, when executed, cause processing circuitry of a vehicle to: receive, via an interface device of the vehicle, a spoken command to identify an in-cabin vehicle zone of two or more in-cabin vehicle zones of the vehicle; receive, via the interface device, background audio data concurrently with a portion of the spoken command; separate the background audio data from the spoken command; select which in-cabin vehicle zone of the two or more in-cabin vehicle zones is identified by the spoken command; store data processed from the received spoken command; control, based on the data processed from the received spoken command, a functional unit of the vehicle using audio input received from the selected in-cabin vehicle zone; and cause the functional unit to control a function within the vehicle.
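Claims 9 and 24 recite audio beamforming within the selected zone by a separate microphone array. A standard way to realise that, offered here only as an illustrative assumption, is delay-and-sum beamforming; the array geometry, sample rate, and steering angle below are invented for the example.

```python
# Delay-and-sum beamforming sketch for the zone-focused array (claims 9/24).
import numpy as np

FS = 16_000          # sample rate, Hz (assumed)
C = 343.0            # speed of sound, m/s
MIC_X = np.array([0.00, 0.05, 0.10, 0.15])   # linear array positions, m (assumed)

def delay_and_sum(channels, steer_angle_deg):
    """Align each microphone channel on the steering direction and average.
    channels: 2-D array, one row per microphone."""
    theta = np.deg2rad(steer_angle_deg)
    delays = MIC_X * np.sin(theta) / C            # steering delays, seconds
    shifts = np.round(delays * FS).astype(int)    # integer-sample alignment
    shifts -= shifts.min()                        # keep all shifts non-negative
    n = channels.shape[1] - shifts.max()
    aligned = [ch[s:s + n] for ch, s in zip(channels, shifts)]
    return np.mean(aligned, axis=0)               # coherent sum toward the zone

# Example: steer toward a talker 30 degrees off broadside (idealised channels).
sig = np.random.default_rng(1).standard_normal(1600)
chans = np.stack([sig for _ in MIC_X])
print(delay_and_sum(chans, 30.0).shape)
```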
2,600
10,852
10,852
15,556,851
2,649
The present invention relates to the communications field and provides a region prompt method and a terminal for displaying an NFC antenna region of the terminal, thereby reducing the difficulty of NFC matching between terminals and the difficulty of performing NFC between the terminals. The method includes: obtaining region information of an NFC antenna when it is determined that a terminal needs to perform near field communication (NFC); and displaying an NFC antenna region corresponding to the region information on a screen of the terminal.
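Claim 3 below (and its device counterparts) spells out the lookup order for the antenna region: consult the terminal's local storage first, and fall back to a cloud server keyed by the terminal's model number. A minimal sketch of that order follows; fetch_from_cloud() and the cache layout are hypothetical.

```python
# Local-store-first lookup of the NFC antenna region (claim 3 pattern).
LOCAL_STORE = {}    # model -> (x, y, width, height); assumed cache layout

def fetch_from_cloud(model):
    """Hypothetical cloud query returning the antenna region for a model."""
    raise NotImplementedError("replace with a real cloud-server request")

def get_antenna_region(model):
    region = LOCAL_STORE.get(model)
    if region is None:                   # cache miss: consult the cloud server
        region = fetch_from_cloud(model)
        LOCAL_STORE[model] = region      # keep the answer for next time
    return region
```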
1. A region prompt method, applied to a terminal, comprising: obtaining region information of a near field communication (NFC) antenna; and displaying, when it is determined that the terminal needs to perform NFC, an NFC antenna region corresponding to the region information on a screen of the terminal. 2. The method according to claim 1, wherein the determining that the terminal needs to perform NFC specifically comprises: determining that the terminal needs to perform NFC when an application of the terminal is in an operating state, the application calls an NFC application programming interface (API) of the terminal, and an NFC function of the terminal is enabled; or determining that the terminal needs to perform NFC when an NFC function of the terminal is enabled, and a voltage of the NFC antenna deviates from a preset value. 3. The method according to claim 1, wherein the obtaining region information of an NFC antenna specifically comprises: querying whether the region information is in a local storage system of the terminal; and if the region information is present, obtaining the region information from the local storage system of the terminal; or if the region information is not present, querying system information of the terminal, and obtaining a model number of the terminal; and querying a cloud server according to the model number, and obtaining NFC antenna region information corresponding to the model number as the region information. 4. The method according to claim 1, wherein after displaying the region corresponding to the NFC antenna on the screen of the terminal, the method further comprises: displaying a center point of the NFC antenna region. 5. The method according to claim 1, wherein the method further comprises: if the NFC antenna region corresponding to the region information goes beyond the screen of the terminal, displaying a direction of the NFC antenna region corresponding to the region information on the screen of the terminal. 6. The method according to claim 1, wherein the method further comprises: when it is detected that the NFC function of the terminal is disabled, removing the NFC antenna region displayed on the screen of the terminal; or when it is detected that the application stops calling an NFC API of the terminal, and that other applications except the application in the terminal do not call the NFC API of the terminal, removing the NFC antenna region displayed on the screen of the terminal; or when it is detected that a voltage of the NFC antenna is at the preset value and does not change within a preset time, removing the NFC antenna region displayed on the screen of the terminal. 7. A terminal, comprising: an obtaining unit, configured to obtain region information of a near field communication (NFC) antenna; a determining unit, configured to determine whether the terminal needs to perform NFC; and a display control unit, configured to display, when the determining unit determines that the terminal needs to perform NFC, an NFC antenna region corresponding to the region information on a screen of the terminal. 8. 
The terminal according to claim 7, wherein the determining unit is specifically configured to determine that the terminal needs to perform NFC when an application of the terminal is in an operating state, the application calls an NFC application programming interface (API) of the terminal, and an NFC function of the terminal is enabled; or determine that the terminal needs to perform NFC when an NFC function of the terminal is enabled, and a voltage of the NFC antenna deviates from a preset value. 9. The terminal according to claim 7, wherein the obtaining unit is specifically configured to query whether the region information is in a local storage system of the terminal; and if the region information is present, obtain the region information from the local storage system of the terminal; or if the region information is not present, query system information of the terminal, and obtain a model number of the terminal; and query a cloud server according to the model number, and obtain NFC antenna region information corresponding to the model number as the region information. 10. The terminal according to claim 7, wherein the display control unit is further configured to display a center point of the NFC antenna region after displaying the region corresponding to the NFC antenna on the screen of the terminal. 11. The terminal according to claim 7, wherein the display control unit is further configured to: if the NFC antenna region corresponding to the region information goes beyond the screen of the terminal, display a direction of the NFC antenna region corresponding to the region information on the screen of the terminal. 12. The terminal according to claim 7, wherein the display control unit is further configured to: when it is detected that an NFC function of the terminal is disabled, remove the NFC antenna region displayed on the screen of the terminal; or when it is detected that the application stops calling an NFC API of the terminal, and that other applications except the application in the terminal do not call the NFC API of the terminal, remove the NFC antenna region displayed on the screen of the terminal; or when it is detected that a voltage of the NFC antenna is at the preset value and does not change within a preset time, remove the NFC antenna region displayed on the screen of the terminal. 13. A terminal, comprising: a processor, configured to obtain region information of a near field communication (NFC) antenna; and determine whether the terminal needs to perform NFC, wherein the processor is further configured to: when the terminal needs to perform NFC, send a command to a screen of the terminal, so as to display an NFC antenna region corresponding to the region information on the screen. 14. The terminal according to claim 13, wherein the determining whether the terminal needs to perform NFC specifically comprises: determining that the terminal needs to perform NFC when an application of the terminal is in an operating state, the application calls an NFC application programming interface (API) of the terminal, and an NFC function of the terminal is enabled; or determining that the terminal needs to perform NFC when an NFC function of the terminal is enabled, and a voltage of the NFC antenna deviates from a preset value. 15. 
The terminal according to claim 13, wherein the obtaining region information of an NFC antenna specifically comprises: querying whether the region information is in a local storage system of the terminal; and if the region information is present, obtaining the region information from the local storage system of the terminal; or if the region information is not present, querying system information of the terminal, and obtaining a model number of the terminal; and querying a cloud server according to the model number, and obtaining NFC antenna region information corresponding to the model number as the region information. 16. The terminal according to claim 13, wherein the processor is further configured to send a command to the screen after displaying the region corresponding to the NFC antenna on the screen of the terminal, so as to display a center point of the NFC antenna region on the screen. 17. The terminal according to claim 13, wherein the processor is further configured to: send a command to the screen when the NFC antenna region corresponding to the region information goes beyond the screen of the terminal, so as to display a direction of the NFC antenna region corresponding to the region information on the screen. 18. The terminal according to claim 13, wherein the processor is further configured to: send a command to the screen when it is detected that an NFC function of the terminal is disabled, so as to remove the NFC antenna region displayed on the screen; or send a command to the screen when it is detected that the application stops calling an NFC API of the terminal, and that other applications except the application in the terminal do not call the NFC API of the terminal, so as to remove the NFC antenna region displayed on the screen; or send a command to the screen when it is detected that a voltage of the NFC antenna is at the preset value and does not change within a preset time, so as to remove the NFC antenna region displayed on the screen. 19. The terminal according to claim 14, wherein the obtaining region information of an NFC antenna specifically comprises: querying whether the region information is in a local storage system of the terminal; and if the region information is present, obtaining the region information from the local storage system of the terminal; or if the region information is not present, querying system information of the terminal, and obtaining a model number of the terminal; and querying a cloud server according to the model number, and obtaining NFC antenna region information corresponding to the model number as the region information. 20. The terminal according to claim 14, wherein the processor is further configured to send a command to the screen after displaying the region corresponding to the NFC antenna on the screen of the terminal, so as to display a center point of the NFC antenna region on the screen.
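Claims 4 and 5 (and claims 10, 11, 16, and 17) distinguish two display cases: when the antenna region fits on the screen, draw it together with its center point; when it lies beyond the screen, show only a direction toward it. A minimal sketch of that branching, with invented screen coordinates, follows:

```python
# Region-versus-direction prompt sketch (claims 4/5 pattern); origin top-left.
def region_prompt(region, screen_w, screen_h):
    """region: (x, y, w, h) of the NFC antenna in screen coordinates."""
    x, y, w, h = region
    on_screen = 0 <= x and 0 <= y and x + w <= screen_w and y + h <= screen_h
    if on_screen:
        center = (x + w / 2, y + h / 2)   # claim 4: also mark the center point
        return {"draw": "region", "rect": region, "center": center}
    # Claim 5: the region lies beyond the screen, so indicate its direction.
    dx = "right" if x + w / 2 > screen_w / 2 else "left"
    dy = "down" if y + h / 2 > screen_h / 2 else "up"
    return {"draw": "direction", "toward": (dx, dy)}

print(region_prompt((100, 40, 80, 60), 480, 800))   # fully on screen
print(region_prompt((100, 850, 80, 60), 480, 800))  # beyond the bottom edge
```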
The present invention relates to the communications field and provides a region prompt method and a terminal for displaying an NFC antenna region of the terminal, thereby reducing the difficulty of NFC matching between terminals and the difficulty of performing NFC between the terminals. The method includes: obtaining region information of an NFC antenna when it is determined that a terminal needs to perform near field communication (NFC); and displaying an NFC antenna region corresponding to the region information on a screen of the terminal.1. A region prompt method, applied to a terminal, comprising: obtaining region information of a near field communication (NFC) antenna; and displaying, when it is determined that the terminal needs to perform NFC, an NFC antenna region corresponding to the region information on a screen of the terminal. 2. The method according to claim 1, wherein the determining that the terminal needs to perform NFC specifically comprises: determining that the terminal needs to perform NFC when an application of the terminal is in an operating state, the application calls an NFC application programming interface (API) of the terminal, and an NFC function of the terminal is enabled; or determining that the terminal needs to perform NFC when an NFC function of the terminal is enabled, and a voltage of the NFC antenna deviates from a preset value. 3. The method according to claim 1, wherein the obtaining region information of an NFC antenna specifically comprises: querying whether the region information is in a local storage system of the terminal; and if the region information is present, obtaining the region information from the local storage system of the terminal; or if the region information is not present, querying system information of the terminal, and obtaining a model number of the terminal; and querying a cloud server according to the model number, and obtaining NFC antenna region information corresponding to the model number as the region information. 4. The method according to claim 1, wherein after displaying the region corresponding to the NFC antenna on the screen of the terminal, the method further comprises: displaying a center point of the NFC antenna region. 5. The method according to claim 1, wherein the method further comprises: if the NFC antenna region corresponding to the region information goes beyond the screen of the terminal, displaying a direction of the NFC antenna region corresponding to the region information on the screen of the terminal. 6. The method according to claim 1, wherein the method further comprises: when it is detected that the NFC function of the terminal is disabled, removing the NFC antenna region displayed on the screen of the terminal; or when it is detected that the application stops calling an NFC API of the terminal, and that other applications except the application in the terminal do not call the NFC API of the terminal, removing the NFC antenna region displayed on the screen of the terminal; or when it is detected that a voltage of the NFC antenna is at the preset value and does not change within a preset time, removing the NFC antenna region displayed on the screen of the terminal. 7. 
A terminal, comprising: an obtaining unit, configured to obtain region information of a near field communication (NFC) antenna; a determining unit, configured to determine whether the terminal needs to perform NFC; and a display control unit, configured to display, when the determining unit determines that the terminal needs to perform NFC, an NFC antenna region corresponding to the region information on a screen of the terminal. 8. The terminal according to claim 7, wherein the determining unit is specifically configured to determine that the terminal needs to perform NFC when an application of the terminal is in an operating state, the application calls an NFC application programming interface (API) of the terminal, and an NFC function of the terminal is enabled; or determine that the terminal needs to perform NFC when an NFC function of the terminal is enabled, and a voltage of the NFC antenna deviates from a preset value. 9. The terminal according to claim 7, wherein the obtaining unit is specifically configured to query whether the region information is in a local storage system of the terminal; and if the region information is present, obtain the region information from the local storage system of the terminal; or if the region information is not present, query system information of the terminal, and obtain a model number of the terminal; and query a cloud server according to the model number, and obtain NFC antenna region information corresponding to the model number as the region information. 10. The terminal according to claim 7, wherein the display control unit is further configured to display a center point of the NFC antenna region after displaying the region corresponding to the NFC antenna on the screen of the terminal. 11. The terminal according to claim 7, wherein the display control unit is further configured to: if the NFC antenna region corresponding to the region information goes beyond the screen of the terminal, display a direction of the NFC antenna region corresponding to the region information on the screen of the terminal. 12. The terminal according to claim 7, wherein the display control unit is further configured to: when it is detected that an NFC function of the terminal is disabled, remove the NFC antenna region displayed on the screen of the terminal; or when it is detected that the application stops calling an NFC API of the terminal, and that other applications except the application in the terminal do not call the NFC API of the terminal, remove the NFC antenna region displayed on the screen of the terminal; or when it is detected that a voltage of the NFC antenna is at the preset value and does not change within a preset time, remove the NFC antenna region displayed on the screen of the terminal. 13. A terminal, comprising: a processor, configured to obtain region information of a near field communication (NFC) antenna; and determine whether the terminal needs to perform NFC, wherein the processor is further configured to: when the terminal needs to perform NFC, send a command to a screen of the terminal, so as to display an NFC antenna region corresponding to the region information on the screen. 14. 
The terminal according to claim 13, wherein the determining whether the terminal needs to perform NFC specifically comprises: determining that the terminal needs to perform NFC when an application of the terminal is in an operating state, the application calls an NFC application programming interface (API) of the terminal, and an NFC function of the terminal is enabled; or determining that the terminal needs to perform NFC when an NFC function of the terminal is enabled, and a voltage of the NFC antenna deviates from a preset value. 15. The terminal according to claim 13, wherein the obtaining region information of an NFC antenna specifically comprises: querying whether there is the region information in a local storage system of the terminal; and if there is the region information, obtaining the region information from the local storage system of the terminal; or if there is not the region information, querying system information of the terminal, and obtaining a model number of the terminal; and querying a cloud server according to the model number, and obtaining NFC antenna region information corresponding to the model number as the region information. 16. The terminal according to claim 13, wherein the processor is further configured to send a command to the screen after displaying the region corresponding to the NFC antenna on the screen of the terminal, so as to display a center point of the NFC antenna region on the screen. 17. The terminal according to claim 13, wherein the processor is further configured to: send a command to the screen when the NFC antenna region corresponding to the region information goes beyond the screen of the terminal, so as to display a direction of the NFC antenna region corresponding to the region information on the screen. 18. The terminal according to claim 13, wherein the processor is further configured to: send a command to the screen when it is detected that an NFC function of the terminal is disabled, so as to remove the NFC antenna region displayed on the screen; or send a command to the screen when it is detected that the application stops calling an NFC API of the terminal, and that other applications except the application in the terminal do not call the NFC API of the terminal, so as to remove the NFC antenna region displayed on the screen; or send a command to the screen when it is detected that a voltage of the NFC antenna is the preset value, and does not change within a preset time, so as to remove the NFC antenna region displayed on the screen. 19. The terminal according to claim 14, wherein the obtaining region information of an NFC antenna specifically comprises: querying whether there is the region information in a local storage system of the terminal; and if there is the region information, obtaining the region information from the local storage system of the terminal; or if there is not the region information, querying system information of the terminal, and obtaining a model number of the terminal; and querying a cloud server according to the model number, and obtaining NFC antenna region information corresponding to the model number as the region information. 20. The terminal according to claim 14, wherein the processor is further configured to send a command to the screen after displaying the region corresponding to the NFC antenna on the screen of the terminal, so as to display a center point of the NFC antenna region on the screen.
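Claims 2, 3, and 6 together specify a decision rule (display when an app uses the NFC API with NFC enabled, or when the antenna voltage deviates from a preset value) and a lookup order (local storage first, then a cloud server keyed by model number), but no concrete data formats. The Python sketch below is a minimal, hypothetical reading of that logic; LOCAL_STORE, CLOUD_REGIONS, get_model_number, the (x, y, w, h) region tuple, and the voltage tolerance are all illustrative assumptions, not elements of the claims.

```python
LOCAL_STORE = {}  # terminal's local storage system: model -> (x, y, w, h)
CLOUD_REGIONS = {"PHONE-X1": (40, 520, 160, 90)}  # stand-in for the cloud server

def get_model_number() -> str:
    """Stand-in for querying the terminal's system information (claim 3)."""
    return "PHONE-X1"

def get_region_info():
    """Return the NFC antenna region, preferring local storage over the cloud."""
    model = get_model_number()
    if model in LOCAL_STORE:                  # query the local storage system first
        return LOCAL_STORE[model]
    region = CLOUD_REGIONS.get(model)         # fall back to the cloud server
    if region is not None:
        LOCAL_STORE[model] = region           # cache for subsequent queries
    return region

def should_display(nfc_enabled: bool, app_calls_nfc_api: bool,
                   antenna_voltage: float, preset: float, tol: float = 0.05) -> bool:
    """Claim 2: display when an app calls the NFC API with NFC enabled, or when
    the antenna voltage deviates from its preset value (a peer device is near)."""
    return (nfc_enabled and app_calls_nfc_api) or \
           (nfc_enabled and abs(antenna_voltage - preset) > tol)

if __name__ == "__main__":
    if should_display(True, False, antenna_voltage=1.32, preset=1.20):
        print("NFC antenna region:", get_region_info())
```

Claim 6's removal conditions would simply be the negations of `should_display`, polled or event-driven depending on the platform.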
2,600
10,853
10,853
15,961,697
2,664
A method includes triggering a camera to capture a plurality of images of a moving object while the object is in a field of view of the camera. At least a portion of a flight path of the moving object may be in the field of view of the camera. The flight path of the moving object may include an initial point of the moving object and a batting area. The method further includes extrapolating one or more flight characteristics of the moving object using the plurality of images to generate an extrapolation of the one or more flight characteristics of the moving object. The method also includes measuring a speed of the moving object using a radar device. The method includes verifying the one or more flight characteristics of the moving object based on the speed of the moving object.
1. A method, comprising: triggering a camera to capture a plurality of images of a moving object while the object is in a field of view of the camera, wherein at least a portion of a flight path of the moving object is in the field of view of the camera, wherein the flight path of the moving object includes an initial point of the moving object and a batting area; using the plurality of images, extrapolating one or more flight characteristics of the moving object to generate an extrapolation of the one or more flight characteristics of the moving object; measuring a speed of the moving object using a radar device; and verifying the one or more flight characteristics of the moving object based on the speed of the moving object. 2. The method of claim 1, wherein the flight characteristics include speed, velocity, rotation, axis of rotation, speed of rotation, vertical angle of elevation, azimuth angle, trajectory, and release angle. 3. The method of claim 2, wherein verifying the one or more flight characteristics of the moving object based on the speed of the moving object comprises: verifying the one or more of the flight characteristics by comparing the extrapolation of the one or more flight characteristics of the moving object with the speed of the moving object; and modifying the one or more of the flight characteristics in response to a determination that the extrapolation of the one or more flight characteristics of the moving object and the speed of the moving object are different. 4. The method of claim 1, wherein the moving object includes a baseball and the initial point of the moving object includes a pitcher's mound. 5. The method of claim 4, further comprising characterizing a pitch of the baseball as one of a fastball, a curve ball, a breaking ball, or a slider based on a direction of a rotation and a release angle of the baseball. 6. The method of claim 4, further comprising characterizing a pitch of the baseball as a ball or a strike based on a comparison of flight characteristics of the pitch of the baseball with a set of reference flight characteristics associated with a set of reference pitches. 7. The method of claim 4, wherein the camera is a single camera located behind home plate. 8. The method of claim 7, wherein the moving object entering the field of view of the single camera triggers the capturing of the plurality of images. 9. A system comprising: a camera; a memory; and a processor communicatively coupled to the memory and configured to read instructions stored on the memory causing the processor to perform operations, the operations comprising: trigger a camera to capture a plurality of images of a moving object while the object is in a field of view of the camera, wherein at least a portion of a flight path of the moving object is in the field of view of the camera, wherein the flight path of the moving object includes an initial point of the moving object and a batting area; using the plurality of images, extrapolate one or more flight characteristics of the moving object to generate an extrapolation of the one or more flight characteristics of the moving object; receive a speed of the moving object from a radar device; and verify the one or more flight characteristics of the moving object based on the speed of the moving object. 10. The system of claim 9, wherein the flight characteristics include speed, velocity, rotation, axis of rotation, speed of rotation, vertical angle of elevation, azimuth angle, trajectory, and release angle. 11. 
The system of claim 10, wherein verifying the one or more flight characteristics of the moving object based on the speed of the moving object comprises: verifying the one or more of the flight characteristics by comparing the extrapolation of the one or more flight characteristics of the moving object with the speed of the moving object; and modifying the one or more of the flight characteristics in response to a determination that the extrapolation of the one or more flight characteristics of the moving object and the speed of the moving object are different. 12. The system of claim 9, wherein the moving object includes a baseball and the initial point of the moving object includes a pitcher's mound. 13. The system of claim 12, the operations further comprising characterizing a pitch of the baseball as one of a fastball, a curve ball, a breaking ball, or a slider based on a direction of a rotation and a release angle of the baseball. 14. The system of claim 12, the operations further comprising characterizing a pitch of the baseball as a ball or a strike based on a comparison of flight characteristics of the pitch of the baseball with a set of reference flight characteristics associated with a set of reference pitches. 15. The system of claim 9, wherein the camera is located proximate to an end of the flight path. 16. The system of claim 15, wherein the moving object entering the field of view of the camera triggers the capturing of the plurality of images. 17. A method, comprising: triggering at least one camera to capture a plurality of images of a moving object while the moving object is in a field of view of the at least one camera and between an initial point of the moving object and a ball travel termination area; extrapolating one or more flight characteristics of the moving object based on the plurality of images; measuring a speed of the moving object using a radar device; and verifying the one or more flight characteristics of the moving object based on the speed measured by the radar device. 18. The method of claim 17, further comprising revising the one or more flight characteristics of the moving object based on the speed measured using the radar device. 19. The method of claim 17, wherein the one or more flight characteristics includes an extrapolated speed of the moving object. 20. The method of claim 17, wherein the moving object includes a baseball and the initial point of the moving object includes a pitcher's mound.
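Claims 1 and 3 describe extrapolating flight characteristics from a burst of images and then verifying them against a radar speed, modifying the characteristic when the two disagree. The sketch below shows one plausible version for the speed characteristic only; the per-frame 3D positions, the 240 fps frame interval, and the tolerance are fabricated for illustration, and a real system would fit a full trajectory model rather than averaging finite differences.

```python
from math import dist

def extrapolate_speed(positions, frame_interval_s):
    """Average finite-difference speed (m/s) over per-frame 3D ball positions."""
    speeds = [dist(a, b) / frame_interval_s for a, b in zip(positions, positions[1:])]
    return sum(speeds) / len(speeds)

def verify_speed(extrapolated, radar_speed, tolerance=0.5):
    """Claim 3: compare the extrapolation with the radar speed and modify the
    flight characteristic when the two differ by more than the tolerance."""
    if abs(extrapolated - radar_speed) <= tolerance:
        return extrapolated          # verified; keep the camera-based estimate
    return radar_speed               # modified; defer to the radar measurement

# Fabricated positions for a roughly 41 m/s pitch sampled at 240 fps:
positions = [(0.0, 1.80, 18.40), (0.0, 1.79, 18.23), (0.0, 1.78, 18.06)]
estimate = extrapolate_speed(positions, frame_interval_s=1 / 240)
print(verify_speed(estimate, radar_speed=40.2))   # prints 40.2 (modified)
```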
2,600
10,854
10,854
15,714,832
2,664
A method and apparatus for verifying a pattern by mapping a grid onto a template that defines a plurality of grid sections. A digital image may be taken of a container showing an identifying mark (or lack thereof). The template and/or grid may be scaled and/or translated to match a perspective of the identifying mark as depicted in the digital image. A correlation function compares individual grid sections to portions of the digital image to find portions that maximize a correlation score. Grid sections with low contrast may be skipped and/or combined with higher-contrast sections to compute a score. If a match condition is satisfied between at least some of the grid sections and the digital image of the identifying mark, the pattern has been verified.
1. A method of verifying a pattern, the method comprising: mapping a grid onto a template, the grid dividing the template into a plurality of grid sections; capturing a digital image of an object; applying a correlation function to at least some of the plurality of grid sections, the correlation function comparing one of the plurality of grid sections to a respective portion of the digital image of the object, the correlation function further outputting a correlation score of each of the at least some of the plurality of grid sections; determining a correlation level between the template and the digital image of the object, the correlation level being based at least in part on the correlation score of each of the at least some of the plurality of grid sections; and verifying the object if the correlation level satisfies a match condition. 2. The method of claim 1, wherein the applying operation includes repeatedly applying the correlation function to different respective portions of the digital image of the object to find a best match portion of the digital image of the object. 3. The method of claim 1, further comprising one of: scaling the digital image of the object to a size of the template and scaling the template to a size of the digital image. 4. The method of claim 1, wherein at least some of the plurality of grid sections are overlapping. 5. The method of claim 1, wherein the correlation function includes the function: $C(f,g)=\frac{\sum (f-\bar{f})(g-\bar{g})}{\sqrt{\sum (f-\bar{f})^{2}\sum (g-\bar{g})^{2}}}$ wherein g is a 2D grid section of the template, f is a 2D section of the digital image with the same dimension as g, and the sum operations are performed over all members of g. 6. The method of claim 1, wherein the applying operation includes determining a correlation score of a second grid section in the plurality of grid sections after a correlation score of a first grid section has been determined, the correlation score of the first grid section satisfying a bootstrapping condition and the first and second grid sections being adjacent. 7. The method of claim 1, further comprising: evaluating a contrast level of each of the plurality of grid sections in the template; and decreasing a number of the plurality of grid sections if the contrast level satisfies a low-contrast condition. 8. A system for verifying a pattern, the system comprising: a camera positioned to capture an image of an object; a comparator circuit configured to apply a correlation function to at least some of a plurality of grid sections dividing a template, the correlation function comparing one of the plurality of grid sections to a respective portion of the image of the object, the correlation function further outputting a correlation score of each of the at least some of the plurality of grid sections; and the comparator circuit further being configured to determine a correlation level between the template and the image of the object, the comparator circuit further configured to verify the object if the correlation level satisfies a match condition. 9. 
The system of claim 8, wherein the comparator circuit is further configured to: skip applying the correlation function, in a first correlation pass, to one of the plurality of grid sections if the one of the plurality of grid sections satisfies a low-contrast condition; combine, in a second correlation pass, the one of the plurality of grid sections with an adjacent grid section into a combined grid section; and apply, in the second correlation pass, a second correlation function to the combined grid section, the second correlation function comparing the combined grid section to another portion of the image of the object. 10. The system of claim 8, wherein applying the correlation function includes a beginning grid section, the beginning grid section being located substantially near the center of the template, and the correlation level of the beginning grid section being determined before other grid sections in the plurality of grid sections. 11. The system of claim 8, wherein the plurality of grid sections are positionally shifted if some of the plurality of grid sections satisfy a low-contrast condition. 12. The system of claim 8, wherein the correlation score is based at least in part on a ratio of matching pixels to non-matching pixels between the one of the plurality of grid sections and the respective portion of the image. 13. The system of claim 8, wherein the correlation function includes the function: $C(f,g)=\frac{\sum (f-\bar{f})(g-\bar{g})}{\sqrt{\sum (f-\bar{f})^{2}\sum (g-\bar{g})^{2}}}$ wherein g is a 2D grid section of the template, f is a 2D section of the digital image with the same dimension as g, and the sum operations are performed over all members of g. 14. The system of claim 8, wherein the comparator circuit is configured to determine the correlation score by repeatedly applying the correlation function to different portions of the image until a best match is found. 15. The system of claim 14, wherein the different portions of the image are chosen based on a rotation of a grid section with respect to the image of the object. 16. A system for verifying the provenance of a consumable good, the system comprising: a receptacle for insertion of a consumable good, the consumable good having a container; a light source positioned to illuminate the container when the consumable good is inserted into the receptacle; a camera positioned to capture an image of at least a portion of the container when the consumable good is inserted into the receptacle; a comparator circuit configured to determine a correlation level between the image and a template, the template being divided into a plurality of grid sections; and a control circuit configured to reject the consumable good if the correlation level does not satisfy a match condition. 17. The system of claim 16, wherein the consumable good includes an input to a beverage. 18. The system of claim 16, wherein rejecting the consumable good includes displaying a message that the system accepts only consumable goods of a certain provenance. 19. The system of claim 16, wherein the correlation level between the image of the container and the template is determined at least in part by adjusting the position of some of the plurality of grid sections with respect to the image of the container. 20. 
The system of claim 16, wherein the correlation level between the image of the container and the template is determined by pushing grid sections adjacent to a matched grid section onto a stack and determining a correlation level for subsequent grid sections according to a position of the subsequent grid sections in the stack.
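Claim 5's correlation function, as reconstructed above, is the standard normalized cross-correlation (the square root in the denominator is an inference; the source rendering was garbled), and claim 2 applies it repeatedly over image portions to find a best match. Below is a brute-force sketch of those two claims; the grid bookkeeping of claims 6-7 and 9-11 is omitted, and the test image and offsets are fabricated for illustration.

```python
import numpy as np

def correlation(f, g):
    """Normalized cross-correlation between an image section f and a grid
    section g, per the reconstructed formula of claim 5."""
    f = f.astype(float) - f.mean()
    g = g.astype(float) - g.mean()
    denom = np.sqrt((f ** 2).sum() * (g ** 2).sum())
    return float((f * g).sum() / denom) if denom else 0.0  # flat sections score 0

def best_match(image, section):
    """Claim 2: repeatedly apply the correlation function to different portions
    of the image and keep the best-matching offset."""
    h, w = section.shape
    scores = {(y, x): correlation(image[y:y + h, x:x + w], section)
              for y in range(image.shape[0] - h + 1)
              for x in range(image.shape[1] - w + 1)}
    return max(scores.items(), key=lambda kv: kv[1])

rng = np.random.default_rng(0)
template = rng.integers(0, 255, size=(12, 12))
image = np.pad(template, ((3, 3), (4, 4)))    # template embedded at offset (3, 4)
section = template[0:6, 0:6]                  # one grid section of the template
print(best_match(image, section))             # -> ((3, 4), ~1.0)
```

A full implementation would repeat this per grid section, then combine the per-section scores into the claim's overall correlation level before testing the match condition.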
2,600
10,855
10,855
15,474,573
2,692
An object of the present invention is to provide a driver IC and a liquid crystal display apparatus that use the circuit of an output channel that is not being used to drive the liquid crystal panel as a backup for another output channel. A driver IC includes a plurality of output channels ch1 to chn, a plurality of output buffer circuits corresponding to each of the plurality of output channels ch1 to chn, and an output channel selection circuit. When a malfunction occurs in the output buffer circuit of an effective channel, the output buffer circuit in which the malfunction occurs is automatically switched to the output buffer circuit of an ineffective channel so that the output of the signal from the effective channel is continued.
1. A driver IC used to drive a liquid crystal display panel, comprising: a plurality of output channels outputting signals to each of a plurality of row wirings or plurality of column wirings in said liquid crystal display panel; a plurality of output buffer circuits corresponding to each of said plurality of output channels; and an output channel selection circuit selecting an output channel used to output a signal from said plurality of output channels in accordance with a preset number of channels, wherein said plurality of output channels include an effective channel selected by said output channel selection circuit and an ineffective channel other than said effective channel, and when a malfunction occurs in one of said output buffer circuits of said effective channel, said output buffer circuit in which said malfunction occurs is automatically switched to said output buffer circuit of said ineffective channel so that an output of a signal from said effective channel is continued. 2. The driver IC according to claim 1, further comprising: a malfunction detection circuit detecting a malfunction of said output buffer circuit; and a selector circuit, wherein when said malfunction detection circuit detects a malfunction of said output buffer circuit of said effective channel, said selector circuit switches said output buffer circuit in which said malfunction is detected to said output buffer circuit of said ineffective channel. 3. The driver IC according to claim 2, wherein said malfunction detection circuit detects a malfunction of said output buffer circuit based on a current consumed by said output buffer circuit. 4. The driver IC according to claim 2, wherein said malfunction detection circuit detects a malfunction of said output buffer circuit based on a voltage level of a signal being output by said output buffer circuit. 5. The driver IC according to claim 2, wherein said malfunction detection circuit detects a malfunction of said output buffer circuit based on a cycle of a signal being output by said output buffer circuit. 6. The driver IC according to claim 2, wherein said malfunction detection circuit is shared among said plurality of output buffer circuits. 7. The driver IC according to claim 3, wherein said malfunction detection circuit is shared among said plurality of output buffer circuits. 8. The driver IC according to claim 4, wherein said malfunction detection circuit is shared among said plurality of output buffer circuits. 9. The driver IC according to claim 5, wherein said malfunction detection circuit is shared among said plurality of output buffer circuits. 10. A liquid crystal display apparatus, comprising: a driver IC according to claim 1; and a liquid crystal display panel driven by said driver IC. 11. A liquid crystal display apparatus, comprising: a plurality of driver ICs according to claim 1; and a liquid crystal display panel driven by said driver IC, wherein one of said plurality of driver ICs is set to a master mode, and other driver ICs are set to a slave mode, said driver ICs which are set to said slave mode operate in accordance with a control signal generated in said driver IC which is set to said master mode, and in each of said plurality of driver ICs, an operation of switching said output buffer circuit in which a malfunction is detected to said output buffer circuit of said ineffective channel is performed regardless of whether said driver IC is set to said master mode or said slave mode.
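The failover in claim 1 can be pictured as a routing table from effective channels to output buffers, with the buffers of ineffective channels held as spares. The toy model below assumes that routing view and collapses the malfunction detection of claims 3-5 (current consumption, output voltage level, or output cycle) into a single callback; the class and method names are illustrative, not from the patent.

```python
class DriverIC:
    """Toy model of the claimed failover: buffers of ineffective (unselected)
    channels stand in for failed buffers of effective channels."""

    def __init__(self, n_channels: int, preset_effective: int):
        # The output channel selection circuit picks the first
        # `preset_effective` channels as effective; the rest are ineffective.
        self.route = {ch: ch for ch in range(preset_effective)}  # channel -> buffer
        self.spare_buffers = list(range(preset_effective, n_channels))

    def on_malfunction(self, buffer_idx: int) -> None:
        """Automatically switch a failed effective-channel buffer to an
        ineffective channel's buffer so the output continues (claim 1)."""
        for ch, buf in self.route.items():
            if buf == buffer_idx and self.spare_buffers:
                self.route[ch] = self.spare_buffers.pop(0)

ic = DriverIC(n_channels=8, preset_effective=6)
ic.on_malfunction(2)     # e.g. detected via current draw, voltage, or cycle
print(ic.route)          # channel 2 is now routed to spare buffer 6
```

In claim 11's multi-IC configuration, each instance would run this switching locally, independent of its master/slave mode.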
2,600
10,856
10,856
15,284,392
2,651
A method and system for communicating audio signals between an input device and an output device via a network. The output device can include loudspeakers and headphones. In some embodiments, an output device, for example a center channel speaker, transmits audio signals to other output devices. In some embodiments, the output device is coupled to, or combined with, a speaker stand or speaker bracket. The network can be wireless, wired, infrared, RF, or powerline-based.
1. A method for providing an audio signal and a control signal that is generated by an input device to a remote loudspeaker via a network, the method comprising: receiving an audio signal from the input device; detecting a characteristic associated with the audio signal; coding the characteristic into a control signal; and transmitting the audio signal and the control signal to a loudspeaker via the network.
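Claim 1 names four steps (receive audio, detect a characteristic, code it into a control signal, transmit both) but leaves the characteristic, the coding, and the wire format unspecified. The sketch below picks one plausible instantiation: peak level as the characteristic, packed ahead of a 16-bit PCM payload. The 2-byte control field and the packet layout are inventions for illustration, not the patent's format.

```python
import struct

def detect_peak(samples):
    """One plausible 'characteristic' of the audio signal: its peak level."""
    return max(abs(s) for s in samples)

def make_packet(samples):
    """Code the characteristic into a control signal and bundle it with the
    audio payload for transmission to the loudspeaker via the network."""
    control = struct.pack(">H", detect_peak(samples))      # 2-byte control signal
    audio = struct.pack(f">{len(samples)}h", *samples)     # 16-bit PCM payload
    return control + audio

packet = make_packet([0, 312, -1287, 954, -77])
print(len(packet), "bytes")   # 12: 2 control bytes + 10 audio bytes
```

A receiving loudspeaker would unpack the control field first and, for example, adjust its gain before playing the payload.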
2,600
10,857
10,857
15,217,547
2,688
An access and storage system including a generally enclosed storage compartment configured to be positioned on a ground surface such that a wheeled conveyance device carrying an item to be transferred is rollable directly into the storage compartment. The system includes a sensor system configured to track at least one of a placement, removal, presence or absence of the item or the wheeled conveyance relative to the storage compartment, and an access control system configured to control access to the storage compartment.
1. An access and storage system comprising: a generally enclosed storage compartment configured to be positioned on a ground surface such that a wheeled conveyance device carrying an item to be transferred is rollable directly into said storage compartment; a sensor system configured to track at least one of a placement, removal, presence or absence of said item or said wheeled conveyance relative to said storage compartment; and an access control system configured to control access to said storage compartment. 2. The system of claim 1 wherein said storage compartment is sized and configured such that a person can entirely enter said compartment to place said wheeled conveyance and said item into said storage compartment. 3. The system of claim 1 further comprising a processor for receiving an order relating to said item, and responsive thereto issuing a code to a user which said user can provide to said access control system to thereby access said storage compartment. 4. The system of claim 3 wherein said processor is configured to issue a supplemental code which a supplemental user can provide to said access control system to thereby access said storage compartment. 5. The system of claim 4 wherein said code and said supplemental code are different. 6. The system of claim 1 further comprising a processor configured to provide a notification to a user when said sensor system provides an output that said item to be transferred is determined to be positioned in said storage compartment. 7. The system of claim 1 wherein said storage compartment includes a collapsible storage shelf coupled to an upper portion of said storage compartment. 8. The system of claim 7 wherein said collapsible shelf, when collapsed, defines or is positioned immediately adjacent to a ceiling of said storage compartment, and when expanded defines a sub-compartment within said storage compartment. 9. The system of claim 8 wherein said system further includes a first door configured to provide access to said sub-compartment when said collapsible shelf is expanded and a second door configured to control access to a remainder of said compartment when said collapsible shelf is expanded. 10. The system of claim 9 wherein said first door and said second door are openable and closable independently of each other. 11. The system of claim 1 wherein said storage compartment is positioned on a ground surface such that said wheeled conveyance device carrying said item to be dispensed is directly rollable from said ground surface outside said storage compartment into said storage compartment. 12. A method for providing access to an item comprising: receiving an order for an item; placing said item on or in a wheeled conveyance device and rolling said wheeled conveyance device into a generally enclosed storage compartment, wherein the storage compartment includes or is associated with an access control system configured to control access to said storage compartment; and providing a code to a user which said user can enter into said access control system to thereby access and remove said item from said storage compartment. 13. The method of claim 12 wherein said storage compartment includes or is associated with a sensor system configured to track at least one of a placement, removal, presence or absence of said item or said wheeled conveyance relative to said storage compartment. 14. The method of claim 12 wherein said item remains on or in said wheeled conveyance after said placing step until said user accesses said item. 
15. The method of claim 12 further comprising the step of receiving said code from said user and responsive thereto enabling said user to access said storage compartment so that said user can remove said item from said storage compartment. 16. The method of claim 12 wherein said wheeled conveyance device is at a first elevation when positioned directly on a ground surface immediately prior to being rolled into said storage compartment, and wherein said wheeled conveyance is at a second elevation after being rolled into said storage compartment, and wherein said first elevation differs from said second elevation by no more than about one inch. 17. The method of claim 12 wherein said placing step includes a person entirely entering said compartment. 18. An access and storage system comprising: a generally enclosed storage compartment sized such that a person can entirely enter said compartment to place an item to be transferred in said storage compartment; a sensor system configured to track at least one of a placement, removal, presence or absence of said item relative to said storage compartment; and an access control system configured to control access of a user to said storage compartment. 19. The system of claim 18 wherein said storage compartment is positioned on a ground surface such that a wheeled conveyance device carrying said item is rollable directly into said storage compartment. 20. An access and storage system comprising: a generally enclosed storage compartment positioned on a ground surface such that a wheeled conveyance device carrying an item to be transferred is rollable directly into said storage compartment; a collapsible storage shelf positioned in said storage compartment, wherein said shelf is configured such that, when collapsed, said shelf defines or is positioned immediately adjacent to a ceiling of said storage compartment, and when expanded defines a sub-compartment within said storage compartment; a first door configured to control access only to said sub-compartment when said collapsible shelf is expanded; and a second door configured to control access to a remainder of said compartment when said collapsible shelf is expanded.
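Claims 3-5 and 15 together amount to a small code-issuance and validation protocol: issue a code per order (plus a supplemental code for a second party) and unlock the compartment when a presented code matches. A minimal sketch under assumed details: 6-digit single-use codes and an in-memory table, neither of which the claims specify.

```python
import secrets

class AccessControl:
    """Issues per-order access codes (claims 3-5) and gates the compartment."""

    def __init__(self):
        self._codes = {}                               # code -> order id

    def issue_code(self, order_id: str) -> str:
        code = f"{secrets.randbelow(10 ** 6):06d}"     # assumed 6-digit format
        self._codes[code] = order_id
        return code

    def try_open(self, code: str) -> bool:
        """Unlock only if the code was issued; codes are single-use here."""
        return self._codes.pop(code, None) is not None

ctrl = AccessControl()
user_code = ctrl.issue_code("order-42")       # code for the ordering user
courier_code = ctrl.issue_code("order-42")    # supplemental code (claims 4-5)
print(ctrl.try_open(user_code))               # True: compartment unlocks
print(ctrl.try_open(user_code))               # False: code already consumed
```

Claim 6's notification would hang off the sensor system: when placement of the item is detected, notify the user that a code-bearing pickup is ready.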
2,600
10,858
10,858
15,875,582
2,666
There is provided an image processing apparatus including an input device configured to receive a stroke input, and a display controller configured to control a displaying of a modified stroke, wherein the modified stroke is synthesized based on characteristic parameters of the received stroke input and characteristic parameters of a reference stroke that has been matched to the received stroke input.
1. An information processing apparatus, comprising: a display; a touch pad configured to receive a consecutive stroke input given by a user; and a processor configured to recognize the consecutive stroke input including at least two characters, and perform an enlargement processing or a contraction processing on the consecutive stroke input to align the at least two characters based on a bounding box surrounding a character in the consecutive stroke input. 2. The information processing apparatus according to claim 1, wherein the processor performs the enlargement processing or the contraction processing on the consecutive stroke input to reposition the at least two characters to be aligned according to a new bounding box that is different than the bounding box that surrounded the character in the consecutive stroke input. 3. The information processing apparatus according to claim 2, wherein the new bounding box is located at a different position than the bounding box that surrounded the character in the consecutive stroke input. 4. The information processing apparatus according to claim 2, wherein the new bounding box and the bounding box that surrounded the character in the consecutive stroke input are of different size or shape. 5. The information processing apparatus according to claim 1, wherein the processor performs the enlargement processing or the contraction processing on the consecutive stroke input to reposition the at least two characters to be aligned within new respective bounding boxes that have been resized and aligned. 6. The information processing apparatus according to claim 1, wherein the processor performs the enlargement processing or the contraction processing on the consecutive stroke input based on a correction coefficient. 7. The information processing apparatus according to claim 1, wherein the processor performs the enlargement processing or the contraction processing on the consecutive stroke input by normalizing characteristic parameters of the at least two characters of the received consecutive stroke input. 8. The information processing apparatus according to claim 1, wherein the processor performs the enlargement processing or the contraction processing on the consecutive stroke input by interpolating characteristic parameters of the at least two characters of the received consecutive stroke input based on a difference in size or shape between the bounding box and the new bounding box. 9. The information processing apparatus according to claim 1, wherein the consecutive stroke input given by the user is a handwritten input that is input upon the touch pad. 10. The information processing apparatus according to claim 1, wherein the consecutive stroke input given by the user is a gesture input. 11. The information processing apparatus according to claim 1, wherein a stylistic characteristic of the consecutive stroke input given by the user is maintained as the processor performs the enlargement processing or the contraction processing on the consecutive stroke input. 12. An information processing method, comprising: receiving a consecutive stroke input given by a user on a touch pad; recognizing the consecutive stroke input including at least two characters; and performing an enlargement processing or a contraction processing on the consecutive stroke input to align the at least two characters based on a bounding box surrounding a character in the consecutive stroke input. 13. 
A non-transitory computer-readable medium having embodied thereon a program, which when executed by a computer causes the computer to perform an information processing method, the method comprising: receiving a consecutive stroke input given by a user on a touch pad; recognizing the consecutive stroke input including at least two characters; and performing an enlargement processing or a contraction processing on the consecutive stroke input to align the at least two characters based on a bounding box surrounding a character in the consecutive stroke input.
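For a concrete picture of the enlargement/contraction step claimed above, the following is a minimal Python sketch. It assumes uniform scaling of each character's strokes from its original bounding box into a new, height-aligned bounding box laid out left to right; the function names, fixed target height, and layout rule are illustrative assumptions, not details taken from the application.

```python
# Hypothetical sketch: rescale each recognized character's strokes from its
# original bounding box into a new, aligned bounding box. Uniform scaling
# preserves the stroke shape (the "stylistic characteristic" of claim 11).

def align_characters(char_strokes, target_height=100.0, gap=10.0):
    """char_strokes: one entry per character; each entry is a list of
    strokes, and each stroke is a list of (x, y) points."""
    aligned, cursor = [], 0.0
    for strokes in char_strokes:
        xs = [x for stroke in strokes for x, _ in stroke]
        ys = [y for stroke in strokes for _, y in stroke]
        x0, y0 = min(xs), min(ys)
        w = max(xs) - x0 or 1.0          # original bounding box width
        h = max(ys) - y0 or 1.0          # original bounding box height
        scale = target_height / h        # enlargement or contraction factor
        aligned.append([[(cursor + (x - x0) * scale, (y - y0) * scale)
                         for x, y in stroke] for stroke in strokes])
        cursor += w * scale + gap        # next character's new bounding box
    return aligned

# Example: two "characters" drawn at different sizes end up height-aligned.
chars = [[[(0, 0), (5, 12)]], [[(20, 3), (26, 9), (24, 20)]]]
print(align_characters(chars))
```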
2,600
10,859
10,859
16,251,423
2,622
A vibration structure that includes a frame member having an opening therein; a vibration part within the opening in the frame member; a support part that connects the vibration part and the frame member and supports the vibration part within the opening in the frame member; a film that deforms in a planar direction in response to voltage application; and a connection member that connects the film to the vibration part and the frame member such that the vibration part vibrates in the planar direction when the film deforms in the planar direction.
1. A vibration structure comprising: a frame member having an opening therein; a vibration part within the opening in the frame member; a support part that connects the vibration part and the frame member and supports the vibration part within the opening in the frame member; a film that deforms in a planar direction in response to voltage application; and a connection member that connects the film to the vibration part and the frame member such that the vibration part vibrates in the planar direction when the film deforms in the planar direction. 2. The vibration structure according to claim 1, wherein the frame member, the vibration part, and the support part are formed of a same material. 3. The vibration structure according to claim 1, wherein the connection member has a thickness sufficient to prevent the film from contacting the vibration part. 4. The vibration structure according to claim 1, wherein a length of the support part in a direction orthogonal to the planar direction is larger than a width thereof along the planar direction. 5. The vibration structure according to claim 1, further comprising a guide plate connected to the frame member and constructed to prevent deformation of the frame member in a direction normal thereto. 6. The vibration structure according to claim 5, wherein the vibration part is a first vibration part and the vibration structure further comprises a second vibration part that is connected to a main face of the first vibration part and which vibrates together with the first vibration part. 7. The vibration structure according to claim 6, wherein the second vibration part includes a vibration-proof material. 8. The vibration structure according to claim 6, further comprising: an elastic body; and a case connected to the second vibration part via the elastic body. 9. The vibration structure according to claim 8, wherein the case is less susceptible to vibration than the second vibration part. 10. The vibration structure according to claim 1, wherein the vibration part is a first vibration part and the vibration structure further comprises a second vibration part that is connected to a main face of the first vibration part and which vibrates together with the first vibration part. 11. The vibration structure according to claim 10, wherein the second vibration part includes a vibration-proof material. 12. The vibration structure according to claim 10, further comprising: an elastic body; and a case connected to the second vibration part via the elastic body. 13. The vibration structure according to claim 1, further comprising a soundproof sheet that covers the opening in the frame member. 14. The vibration structure according to claim 1, further comprising a display having an outer periphery retention frame, and wherein the frame member is connected to the outer periphery retention frame of the display. 15. The vibration structure according to claim 1, wherein the support part is a spring structure. 16. The vibration structure according to claim 1, wherein the vibration structure is constructed to bend in a direction perpendicular to a vibration direction of the vibration part. 17. A vibration device comprising: the vibration structure according to claim 1; and a driving circuit constructed to apply a driving signal to the film. 18. 
A tactile sense presentation device comprising: the vibration device according to claim 17; and a touch detection part that detects a touch operation imparted to the vibration part, wherein the driving circuit applies a driving signal to the film when the touch detection part detects the touch operation.
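As a rough illustration of how the tactile presentation device of claim 18 might be driven, here is a minimal control-loop sketch in Python. The polling structure, waveform, drive frequency, and the read_touch/apply_voltage hardware stubs are all assumptions made for illustration; the claims only require that the driving circuit apply a driving signal to the film when a touch on the vibration part is detected.

```python
import math
import time

def driving_signal(t, freq_hz=170.0, amplitude=1.0):
    # Sinusoidal drive voltage; the frequency and shape are assumed, not
    # specified by the claims.
    return amplitude * math.sin(2.0 * math.pi * freq_hz * t)

def run_tactile_loop(read_touch, apply_voltage, pulse_s=0.05, step_s=0.001):
    """read_touch() -> bool and apply_voltage(volts) are hardware stubs."""
    while True:
        if read_touch():                          # touch detection part
            start = time.monotonic()
            while time.monotonic() - start < pulse_s:
                # Voltage deforms the film in the planar direction, which
                # vibrates the vibration part via the connection member.
                apply_voltage(driving_signal(time.monotonic() - start))
                time.sleep(step_s)
            apply_voltage(0.0)                    # end of haptic pulse
        time.sleep(step_s)
```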
2,600
10,860
10,860
15,229,932
2,611
Systems, apparatuses, and methods for implementing fine-grain power management for virtual reality (VR) systems are disclosed. A VR compositor monitors workload tasks while rendering and displaying content of a VR application. The VR compositor determines the priorities of the different tasks of a given VR frame and causes power states to be assigned to processing units to match the priorities of the tasks being performed. For example, if a first task within a first frame period is assigned a high priority, a processing unit executing that task operates at a relatively high power performance state while performing the first task. If a second task within the first frame period is assigned a low priority, the processing unit operates at a relatively low power performance state while performing the second task. By implementing fine-grain power management in a VR environment, the likelihood of the processing unit suffering a thermal event or impaired performance is reduced.
1. A system comprising: one or more processors; and logic configured to: monitor execution of a plurality of tasks for rendering a given virtual reality (VR) frame; determine a priority of each task of the plurality of tasks; and assign a power performance state to the one or more processors based at least in part on the determined priority of each task. 2. The system as recited in claim 1, wherein the system is configured to: detect a first task and a second task of the plurality of tasks assigned to execute within a given VR frame period; utilize a first power performance state for the one or more processors while performing the first task; and utilize a second power performance state for the one or more processors while performing the second task, wherein the second power state is different from the first power state. 3. The system as recited in claim 2, wherein the system is configured to utilize the second power performance state for a plurality of tasks, wherein the second power performance state is a higher power state than the first power state, and wherein the second task corresponds to processing for an asynchronous timewarp. 4. The system as recited in claim 2, wherein: the first task corresponds to a first interval within the given VR frame period; the second task corresponds to a second interval within the given VR frame period; the first interval and second interval are non-overlapping intervals; and the first interval and second interval are specified in relation to a vertical synchronization signal. 5. The system as recited in claim 1, wherein the system is configured to: monitor an execution time of each task of the plurality of tasks; determine a latency requirement of each task; monitor power and thermal states of the one or more processors; and dynamically adjust a power performance state of the one or more processors based on one or more of an execution time of each task, latency requirement of each task, power state of the one or more processors, and thermal state of the one or more processors. 6. The system as recited in claim 1, wherein the system is configured to identify non-overlapping intervals within each rendered VR frame, wherein each non-overlapping interval corresponds to a separate task. 7. The system as recited in claim 1, wherein the one or more processors comprise one or more graphics processing units, and wherein the virtual reality frame is rendered for a head-mounted display. 8. A method comprising: monitoring execution of a plurality of tasks for rendering a given virtual reality (VR) frame; determining a priority of each task of the plurality of tasks; and assigning a power performance state to the one or more processors based at least in part on the determined priority of each task. 9. The method as recited in claim 8, further comprising: detecting a first task and a second task of the plurality of tasks assigned to execute within a given VR frame period; utilizing a first power performance state for the one or more processors while performing the first task; and utilizing a second power performance state for the one or more processors while performing the second task, wherein the second power state is different from the first power state. 10. The method as recited in claim 9, further comprising utilizing the second power performance state for a plurality of tasks, wherein the second power performance state is a higher power state than the first power state, and wherein the second task corresponds to processing for an asynchronous timewarp. 11. 
The method as recited in claim 9, wherein: the first task corresponds to a first interval within the given VR frame period; the second task corresponds to a second interval within the given VR frame period; the first interval and second interval are non-overlapping intervals; and the first interval and second interval are specified in relation to a vertical synchronization signal. 12. The method as recited in claim 8, further comprising: monitoring an execution time of each task of the plurality of tasks; determining a latency requirement of each task; monitoring power and thermal states of the one or more processors; and dynamically adjusting a power performance state of the one or more processors based on one or more of an execution time of each task, latency requirement of each task, power state of the one or more processors, and thermal state of the one or more processors. 13. The method as recited in claim 8, further comprising identifying non-overlapping intervals within each rendered VR frame, wherein each non-overlapping interval corresponds to a separate task. 14. The method as recited in claim 8, wherein the one or more processors comprise one or more graphics processing units, and wherein the virtual reality frame is rendered for a head-mounted display. 15. A non-transitory computer readable storage medium storing program instructions, wherein the program instructions are executable by a processor to: monitor execution of a plurality of tasks for rendering a given virtual reality (VR) frame; determine a priority of each task of the plurality of tasks; and assign a power performance state to the one or more processors based at least in part on the determined priority of each task. 16. The non-transitory computer readable storage medium as recited in claim 15, wherein the program instructions are further executable by a processor to: detect a first task and a second task of the plurality of tasks assigned to execute within a given VR frame period; utilize a first power performance state for the one or more processors while performing the first task; and utilize a second power performance state for the one or more processors while performing the second task, wherein the second power state is different from the first power state. 17. The non-transitory computer readable storage medium as recited in claim 16, wherein the program instructions are further executable by a processor to utilize the second power performance state for a plurality of tasks, wherein the second power performance state is a higher power state than the first power state, and wherein the second task corresponds to processing for an asynchronous timewarp. 18. The non-transitory computer readable storage medium as recited in claim 16, wherein: the first task corresponds to a first interval within the given VR frame period; the second task corresponds to a second interval within the given VR frame period; the first interval and second interval are non-overlapping intervals; and the first interval and second interval are specified in relation to a vertical synchronization signal. 19. 
The non-transitory computer readable storage medium as recited in claim 15, wherein the program instructions are further executable by a processor to: monitor an execution time of each task of the plurality of tasks; determine a latency requirement of each task; monitor power and thermal states of the one or more processors; and dynamically adjust a power performance state of the one or more processors based on one or more of an execution time of each task, latency requirement of each task, power state of the one or more processors, and thermal state of the one or more processors. 20. The non-transitory computer readable storage medium as recited in claim 15, wherein the program instructions are further executable by a processor to identify non-overlapping intervals within each rendered VR frame, wherein each non-overlapping interval corresponds to a separate task.
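To make the claimed priority-to-power-state mapping concrete, here is a short Python sketch. The three-level state table, the thermal back-off threshold, and the example task intervals are assumptions; the claims only require that non-overlapping tasks within a VR frame period receive power performance states matched to their priorities, with dynamic adjustment for power and thermal state.

```python
from dataclasses import dataclass

P_STATES = {"low": 0, "high": 2}         # hypothetical DVFS performance levels

@dataclass
class Task:
    name: str
    priority: str                        # "high" (e.g., asynchronous timewarp)
    interval_ms: tuple                   # (start, end) relative to vsync

def pick_power_state(task, thermal_margin_c):
    state = P_STATES[task.priority]
    if thermal_margin_c < 5.0 and state > 0:
        state -= 1                       # back off before a thermal event
    return state

# Two non-overlapping tasks within one ~11 ms (90 Hz) frame period.
frame_tasks = [Task("render_eye_buffers", "low", (0, 7)),
               Task("asynchronous_timewarp", "high", (7, 11))]
for task in frame_tasks:
    print(task.name, "-> P-state", pick_power_state(task, thermal_margin_c=8.0))
```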
2,600
10,861
10,861
16,395,073
2,631
A method of a user equipment (UE) for a channel state information (CSI) feedback in a wireless communication system is provided. The method comprises receiving, from a base station (BS), CSI feedback configuration information including a number ($K_0$) of coefficients for the CSI feedback, deriving, based on the CSI feedback configuration information, the CSI feedback including $K_1$ coefficients that are a subset of a total of $Q$ coefficients, wherein $K_1 \le K_0$ and $K_0 < Q$, and transmitting, to the BS, the CSI feedback including the $K_1$ coefficients over an uplink channel.
1. A user equipment (UE) for a channel state information (CSI) feedback in a wireless communication system, the UE comprising: a transceiver configured to receive, from a base station (BS), CSI feedback configuration information including a number ($K_0$) of coefficients for the CSI feedback; and a processor operably connected to the transceiver, the processor configured to derive, based on the CSI feedback configuration information, the CSI feedback including $K_1$ coefficients that are a subset of a total of $Q$ coefficients, wherein $K_1 \le K_0$ and $K_0 < Q$, wherein the transceiver is further configured to transmit, to the BS, the CSI feedback including the $K_1$ coefficients over an uplink channel. 2. The UE of claim 1, wherein: $Q = 2LM$; the total of $Q$ coefficients forms a $2L \times M$ coefficient matrix $C_l$ comprising $2L$ rows and $M$ columns; the $K_1$ coefficients correspond to non-zero coefficients of the $2L \times M$ coefficient matrix $C_l$; and the remaining $2LM - K_1$ coefficients of the $2L \times M$ coefficient matrix $C_l$ are zero. 3. The UE of claim 1, wherein: the processor is further configured to determine a number $K_1$; and the transceiver is further configured to transmit, to the BS, the CSI feedback including the number $K_1$. 4. The UE of claim 2, wherein: the processor is further configured to determine a bit sequence $B = b_0 b_1 \ldots b_{2LM-1}$ comprising $2LM$ bits to indicate indices of the $K_1$ coefficients; and the transceiver is further configured to transmit, to the BS, the CSI feedback including the bit sequence $B$, where the bit sequence $B$ comprises $K_1$ ones and $2LM - K_1$ zeros, and an $i$-th bit $b_i$ of the bit sequence $B$ is set to one when an $i$-th coefficient of the total of $2LM$ coefficients is included in the $K_1$ coefficients. 5. The UE of claim 2, wherein: $K_0$ is determined as $K_0 = \lceil a \times 2LM \rceil$ where $a \le 1$; and $a$ is configured via higher layer signaling. 6. The UE of claim 5, wherein $a$ is configured from a set of values including $\{1/4, 1/2\}$. 7. The UE of claim 2, wherein the CSI feedback includes a precoding matrix indicator (PMI) indicating the $2L \times M$ coefficient matrix $C_l$, a spatial domain (SD) basis matrix $A_l$, and a frequency domain (FD) basis matrix $B_l$ for each $l = 1, \ldots, \nu$, and wherein: $l$ is a layer index with a range of $l = 1, \ldots, \nu$; $\nu$ is an associated rank indicator (RI) value; a precoding matrix for each FD unit of a total number ($N_3$) of FD units is determined by columns of $W = \frac{1}{\nu} [W_1\ W_2\ \ldots\ W_\nu]$, where $W_l = \begin{bmatrix} A_l & 0 \\ 0 & A_l \end{bmatrix} C_l B_l^H = \begin{bmatrix} \sum_{k=0}^{M-1} \sum_{i=0}^{L-1} c_{l,i,k}\,(a_{l,i} b_{l,k}^H) \\ \sum_{k=0}^{M-1} \sum_{i=0}^{L-1} c_{l,i+L,k}\,(a_{l,i} b_{l,k}^H) \end{bmatrix}$; $A_l = [a_{l,0}\ a_{l,1}\ \ldots\ a_{l,L-1}]$, where $a_{l,i}$ is an $N_1 N_2 \times 1$ column vector for SD antenna ports, where $N_1$ and $N_2$ are numbers of antenna ports, respectively, with a same antenna polarization in a first and a second dimensions of two-dimensional dual-polarized channel state information-reference signal (CSI-RS) antenna ports at the BS; $B_l = [b_{l,0}\ b_{l,1}\ \ldots\ b_{l,M-1}]$, where $b_{l,k}$ is an $N_3 \times 1$ column vector for FD units; the $2L \times M$ matrix $C_l$ comprises coefficients $c_{l,i,k}$; and a number ($L$) of column vectors for the SD antenna ports, a number ($M$) of column vectors for the FD units, and the total number ($N_3$) of the FD units are configured via higher layer signaling. 8. 
A base station (BS) for a channel state information (CSI) feedback in a wireless communication system, the BS comprising: a transceiver configured to: transmit, to a user equipment (UE), CSI feedback configuration information including a number ($K_0$) of coefficients for the CSI feedback; and receive, from the UE, the CSI feedback including $K_1$ coefficients over an uplink channel; and a processor operably connected to the transceiver, the processor configured to decode the CSI feedback including the $K_1$ coefficients, wherein: the CSI feedback is derived based on the CSI feedback configuration information; and the CSI feedback includes the $K_1$ coefficients that are a subset of a total of $Q$ coefficients, wherein $K_1 \le K_0$ and $K_0 < Q$. 9. The BS of claim 8, wherein: $Q = 2LM$; the total of $Q$ coefficients forms a $2L \times M$ coefficient matrix $C_l$ comprising $2L$ rows and $M$ columns; the $K_1$ coefficients correspond to non-zero coefficients of the $2L \times M$ coefficient matrix $C_l$; and the remaining $2LM - K_1$ coefficients of the $2L \times M$ coefficient matrix $C_l$ are zero. 10. The BS of claim 8, wherein the transceiver is further configured to receive, from the UE, the CSI feedback including a number $K_1$. 11. The BS of claim 9, wherein the transceiver is further configured to receive, from the UE, the CSI feedback including a bit sequence $B = b_0 b_1 \ldots b_{2LM-1}$ comprising $2LM$ bits to indicate indices of the $K_1$ coefficients, where the bit sequence $B$ comprises $K_1$ ones and $2LM - K_1$ zeros, and an $i$-th bit $b_i$ of the bit sequence $B$ is set to one when an $i$-th coefficient of the total of $2LM$ coefficients is included in the $K_1$ coefficients. 12. The BS of claim 9, wherein: $K_0$ is determined as $K_0 = \lceil a \times 2LM \rceil$ where $a \le 1$; and $a$ is configured via higher layer signaling. 13. The BS of claim 12, wherein $a$ is configured from a set of values including $\{1/4, 1/2\}$. 14. The BS of claim 9, wherein the CSI feedback includes a precoding matrix indicator (PMI) indicating the $2L \times M$ coefficient matrix $C_l$, a spatial domain (SD) basis matrix $A_l$, and a frequency domain (FD) basis matrix $B_l$ for each $l = 1, \ldots, \nu$, and wherein: $l$ is a layer index with a range of $l = 1, \ldots, \nu$; $\nu$ is an associated rank indicator (RI) value; a precoding matrix for each FD unit of a total number ($N_3$) of FD units is determined by columns of $W = \frac{1}{\nu} [W_1\ W_2\ \ldots\ W_\nu]$, where $W_l = \begin{bmatrix} A_l & 0 \\ 0 & A_l \end{bmatrix} C_l B_l^H = \begin{bmatrix} \sum_{k=0}^{M-1} \sum_{i=0}^{L-1} c_{l,i,k}\,(a_{l,i} b_{l,k}^H) \\ \sum_{k=0}^{M-1} \sum_{i=0}^{L-1} c_{l,i+L,k}\,(a_{l,i} b_{l,k}^H) \end{bmatrix}$; $A_l = [a_{l,0}\ a_{l,1}\ \ldots\ a_{l,L-1}]$, where $a_{l,i}$ is an $N_1 N_2 \times 1$ column vector for SD antenna ports, where $N_1$ and $N_2$ are numbers of antenna ports, respectively, with a same antenna polarization in a first and a second dimensions of two-dimensional dual-polarized channel state information-reference signal (CSI-RS) antenna ports at the BS; $B_l = [b_{l,0}\ b_{l,1}\ \ldots\ b_{l,M-1}]$, where $b_{l,k}$ is an $N_3 \times 1$ column vector for FD units; the $2L \times M$ matrix $C_l$ comprises coefficients $c_{l,i,k}$; and a number ($L$) of column vectors for the SD antenna ports, a number ($M$) of column vectors for the FD units, and the total number ($N_3$) of the FD units are configured via higher layer signaling. 15. 
A method of a user equipment (UE) for a channel state information (CSI) feedback in a wireless communication system, the method comprising: receiving, from a base station (BS), CSI feedback configuration information including a number ($K_0$) of coefficients for the CSI feedback; deriving, based on the CSI feedback configuration information, the CSI feedback including $K_1$ coefficients that are a subset of a total of $Q$ coefficients, wherein $K_1 \le K_0$ and $K_0 < Q$; and transmitting, to the BS, the CSI feedback including the $K_1$ coefficients over an uplink channel. 16. The method of claim 15, wherein: $Q = 2LM$; the total of $Q$ coefficients forms a $2L \times M$ coefficient matrix $C_l$ comprising $2L$ rows and $M$ columns; the $K_1$ coefficients correspond to non-zero coefficients of the $2L \times M$ coefficient matrix $C_l$; and the remaining $2LM - K_1$ coefficients of the $2L \times M$ coefficient matrix $C_l$ are zero. 17. The method of claim 15, further comprising: determining a number $K_1$; and transmitting, to the BS, the CSI feedback including the number $K_1$. 18. The method of claim 16, further comprising: determining a bit sequence $B = b_0 b_1 \ldots b_{2LM-1}$ comprising $2LM$ bits to indicate indices of the $K_1$ coefficients; and transmitting, to the BS, the CSI feedback including the bit sequence $B$, where the bit sequence $B$ comprises $K_1$ ones and $2LM - K_1$ zeros, and an $i$-th bit $b_i$ of the bit sequence $B$ is set to one when an $i$-th coefficient of the total of $2LM$ coefficients is included in the $K_1$ coefficients. 19. The method of claim 16, wherein: $K_0$ is determined as $K_0 = \lceil a \times 2LM \rceil$ where $a \le 1$; $a$ is configured via higher layer signaling; and $a$ is configured from a set of values including $\{1/4, 1/2\}$. 20. The method of claim 16, wherein the CSI feedback includes a precoding matrix indicator (PMI) indicating the $2L \times M$ coefficient matrix $C_l$, a spatial domain (SD) basis matrix $A_l$, and a frequency domain (FD) basis matrix $B_l$ for each $l = 1, \ldots, \nu$, and wherein: $l$ is a layer index with a range of $l = 1, \ldots, \nu$; $\nu$ is an associated rank indicator (RI) value; a precoding matrix for each FD unit of a total number ($N_3$) of FD units is determined by columns of $W = \frac{1}{\nu} [W_1\ W_2\ \ldots\ W_\nu]$, where $W_l = \begin{bmatrix} A_l & 0 \\ 0 & A_l \end{bmatrix} C_l B_l^H = \begin{bmatrix} \sum_{k=0}^{M-1} \sum_{i=0}^{L-1} c_{l,i,k}\,(a_{l,i} b_{l,k}^H) \\ \sum_{k=0}^{M-1} \sum_{i=0}^{L-1} c_{l,i+L,k}\,(a_{l,i} b_{l,k}^H) \end{bmatrix}$; $A_l = [a_{l,0}\ a_{l,1}\ \ldots\ a_{l,L-1}]$, where $a_{l,i}$ is an $N_1 N_2 \times 1$ column vector for SD antenna ports, where $N_1$ and $N_2$ are numbers of antenna ports, respectively, with a same antenna polarization in a first and a second dimensions of two-dimensional dual-polarized channel state information-reference signal (CSI-RS) antenna ports at the BS; $B_l = [b_{l,0}\ b_{l,1}\ \ldots\ b_{l,M-1}]$, where $b_{l,k}$ is an $N_3 \times 1$ column vector for FD units; the $2L \times M$ matrix $C_l$ comprises coefficients $c_{l,i,k}$; and a number ($L$) of column vectors for the SD antenna ports, a number ($M$) of column vectors for the FD units, and the total number ($N_3$) of the FD units are configured via higher layer signaling.
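A small numeric sketch of the coefficient reporting described in these claims, written in Python with NumPy. The magnitude-based choice of which coefficients to zero out is an assumption made for illustration; the claims themselves only fix the bookkeeping: at most $K_0 = \lceil a \times 2LM \rceil$ coefficients may be reported, the UE reports $K_1 \le K_0$ non-zero coefficients, and a $2LM$-bit sequence $B$ marks their indices.

```python
import math
import numpy as np

L, M, a = 4, 7, 0.5                       # a configured from, e.g., {1/4, 1/2}
rng = np.random.default_rng(0)
C = rng.normal(size=(2 * L, M)) + 1j * rng.normal(size=(2 * L, M))

Q = 2 * L * M                             # total coefficients (here 56)
K0 = math.ceil(a * Q)                     # reporting budget: ceil(0.5*56) = 28
mags = np.abs(C).ravel()
top = np.argsort(mags)[::-1][:K0]         # strongest K0 candidates (assumed rule)
keep = [i for i in top if mags[i] > 0.4 * mags.max()]  # drop negligible ones

bits = np.zeros(Q, dtype=int)
bits[keep] = 1                            # b_i = 1 iff coefficient i is reported
K1 = int(bits.sum())                      # K1 <= K0 < Q
B = "".join(map(str, bits))
print(f"Q={Q}, K0={K0}, K1={K1}, B={B}")
```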
2,600
10,862
10,862
15,675,645
2,612
Techniques for gradually transitioning a user from a first navigation scheme to a second navigation scheme in a 3D design application that generates and displays a 3D virtual environment. The design application initially implements the first navigation scheme, a first-person scheme, along with a set of function tools associated with the second, standard navigation scheme. The design application monitors for a set of patterns of navigation actions during use of the first-person navigation scheme, each pattern being one that is performed more efficiently using the standard navigation scheme. Upon detecting such a pattern during use of the first-person navigation scheme, the design application may switch to the standard navigation scheme. Also, upon detecting selection of a function tool, the design application may switch to the standard navigation scheme for the duration of the function tool's use. When the function tool is closed, the design application may switch back to the first-person navigation scheme.
1. A computer-implemented method for navigating a three-dimensional (3D) virtual environment that includes one or more objects, the method comprising: enabling a first navigation scheme for navigating the 3D virtual environment; receiving a plurality of inputs based on the first navigation scheme that cause a first set of navigation actions to occur within the 3D virtual environment; based on a set of navigation patterns, determining that the first set of navigation actions comprises a particular navigation pattern; and in response, enabling a second navigation scheme for navigating the 3D virtual environment. 2. The computer-implemented method of claim 1, wherein the particular navigation pattern comprises viewing a first object included within the 3D virtual environment at different distances. 3. The computer-implemented method of claim 1, wherein the particular navigation pattern comprises viewing a first object included within the 3D virtual environment at different angles. 4. The computer-implemented method of claim 1, further comprising receiving one or more inputs based on the second navigation scheme that cause a second set of navigation actions to occur within the 3D virtual environment. 5. The computer-implemented method of claim 1, further comprising, in response to determining that the first set of navigation actions comprises the particular navigation pattern, displaying an estimate of an amount of time that is saved using the second navigation scheme, instead of the first navigation scheme, to perform the first set of navigation actions. 6. The computer-implemented method of claim 1, further comprising, in response to determining that the first set of navigation actions comprises the particular navigation pattern, displaying a prompt to invoke the second navigation scheme instead of the first navigation scheme. 7. The computer-implemented method of claim 1, wherein: the first navigation scheme comprises a first-person navigation scheme that invokes camera position and camera orientation tools; and the second navigation scheme comprises a standard navigation scheme that invokes orbit, pan, and zoom tools. 8. The computer-implemented method of claim 1, wherein the 3D virtual environment is generated by a computer-aided design application. 9. The computer-implemented method of claim 1, further comprising, before determining that the first set of navigation actions comprises the particular navigation pattern, determining that the navigation actions included in the first set of navigation actions focus on a same object for a threshold period of time. 10. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform the steps of: enabling a first navigation scheme for navigating a 3D virtual environment comprising one or more objects; receiving a plurality of inputs based on the first navigation scheme that cause a first set of navigation actions to occur within the 3D virtual environment; based on a set of navigation patterns, determining that the first set of navigation actions comprises a particular navigation pattern; and in response, enabling a second navigation scheme for navigating the 3D virtual environment. 11. The non-transitory computer-readable medium of claim 10, wherein the particular navigation pattern comprises viewing a first object included within the 3D virtual environment at different distances. 12. 
The non-transitory computer-readable medium of claim 10, wherein the particular navigation pattern comprises viewing a first object included within the 3D virtual environment at different angles. 13. The non-transitory computer-readable medium of claim 10, further comprising receiving one or more inputs based on the second navigation scheme that cause a second set of navigation actions to occur within the 3D virtual environment. 14. The non-transitory computer-readable medium of claim 10, wherein: the first navigation scheme comprises a first-person navigation scheme; and the second navigation scheme comprises an object-centric navigation scheme. 15. The non-transitory computer-readable medium of claim 10, wherein the 3D virtual environment is generated by a computer-aided design application. 16. The non-transitory computer-readable medium of claim 10, further comprising, before determining that the first set of navigation actions comprises the particular navigation pattern, determining that the navigation actions included in the first set of navigation actions focus on a same object for a threshold period of time. 17. The non-transitory computer-readable medium of claim 10, further comprising, in response to determining that the first set of navigation actions comprises the particular navigation pattern, displaying a statement informing the user that recent navigation actions are performed more efficiently using the second navigation scheme. 18. The non-transitory computer-readable medium of claim 10, further comprising, in response to determining that the first set of navigation actions comprises the particular navigation pattern, displaying a prompt for displaying tutorial information for the second navigation scheme. 19. A system, comprising: a memory that includes a design engine; and a processor that is coupled to the memory and, when executing the design engine, performs the steps of: enabling a first navigation scheme for navigating a three-dimensional (3D) virtual environment comprising one or more objects; receiving a plurality of inputs based on the first navigation scheme that cause a first set of navigation actions to occur within the 3D virtual environment; based on a set of navigation patterns, determining that the first set of navigation actions comprises a particular navigation pattern; and in response, enabling a second navigation scheme for navigating the 3D virtual environment. 20. The system of claim 19, further comprising, before determining that the first set of navigation actions comprises the particular navigation pattern, determining that the navigation actions included in the first set of navigation actions focus on a same object for a threshold period of time.
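As a hedged illustration of the pattern trigger in these claims, the Python sketch below keeps a sliding window of navigation events and switches from the first-person scheme to the standard scheme once recent actions focus on a same object for a threshold period while viewing it at several distances. The event format, thresholds, and distance bucketing are assumptions, not details from the application.

```python
from collections import deque

class SchemeSwitcher:
    """Enables the second (standard) scheme when a navigation pattern is seen."""
    def __init__(self, focus_threshold_s=5.0, min_distance_buckets=3):
        self.events = deque()                 # (time_s, object_id, distance)
        self.focus_threshold_s = focus_threshold_s
        self.min_distance_buckets = min_distance_buckets
        self.scheme = "first_person"

    def on_navigation(self, time_s, object_id, distance):
        self.events.append((time_s, object_id, distance))
        while time_s - self.events[0][0] > self.focus_threshold_s:
            self.events.popleft()             # keep a sliding time window
        objects = {obj for _, obj, _ in self.events}
        buckets = {round(dist) for _, _, dist in self.events}
        window_s = time_s - self.events[0][0]
        if (self.scheme == "first_person"
                and len(objects) == 1                      # same object in focus
                and window_s >= 0.9 * self.focus_threshold_s
                and len(buckets) >= self.min_distance_buckets):
            self.scheme = "standard"                       # orbit/pan/zoom tools
        return self.scheme

# Example: one object viewed at four distances within ~5 s triggers the switch.
switcher = SchemeSwitcher()
for t, d in [(0.0, 10.0), (2.0, 6.0), (4.0, 3.0), (4.6, 8.0)]:
    print(switcher.on_navigation(t, "chair_01", d))
```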
2,600
10,863
10,863
15,677,520
2,616
A system facilitates augmented reality (AR) processing by receiving captured media from a user device and context information relating to media that is being delivered to a receiving device. The system may use the delivered media together with the captured media to generate one or more virtual objects. The user device may augment the user's view of reality, as reflected in the captured media, by overlaying or otherwise incorporating the virtual objects into that view.
1. A computer-implemented method for augmented reality comprising a computer performing: accessing first context information that is based on content in a delivered media stream that is being delivered to a receiving device; accessing second context information that is based on content in a captured media stream, the second context information representative of at least one of: a physical location of a user device, a physical object within a field of view of the user device, and a result generated from a content analysis of the captured media stream; determining a first virtual object using at least the first context information, wherein the first virtual object is based on the content in the delivered media stream; determining, using at least the second context information, and based at least on the content in the captured media stream, transformational information comprising at least one transformation to be performed on the first virtual object; and providing, to the user device, information representative of the first virtual object and the transformational information, for enabling the user device to display an augmented field of view based on the field of view, wherein the field of view of the user device is augmented with one or more images of the first virtual object by rendering the one or more images of the first virtual object using the information representative of the first virtual object as transformed using the transformational information. 2. (canceled) 3. The computer-implemented method of claim 1 wherein the first virtual object is a representation of an object that appears in the delivered media stream. 4. The computer-implemented method of claim 1 wherein the first virtual object does not appear in the delivered media stream but is related to one or more objects that appear in the delivered media stream. 5. The computer-implemented method of claim 1 wherein the augmented field of view comprises a presentation of the captured media stream on the user device, wherein the presentation of the captured media stream includes the one or more images of the first virtual object rendered using the information representative of the first virtual object and the transformational information. 6. The computer-implemented method of claim 1 wherein the field of view of the neighborhood is seen using transparent eye pieces having active display elements disposed thereon to render the one or more images of the first virtual object. 7. The computer-implemented method of claim 1 wherein the captured media stream is associated with a timeline, the method further comprising determining a first time along the timeline, wherein the field of view of the neighborhood seen using the user device includes the one or more images of the first virtual object rendered at a time that is based on the first time. 8. (canceled) 9. The computer-implemented method of claim 1 wherein determining a first virtual object further includes determining the first virtual object using the second context information. 10. The computer-implemented method of claim 1 wherein determining a first virtual object is further based on events identified in the content of the delivered media stream or events identified in the content of the captured media stream. 11. (canceled) 12. 
The computer-implemented method of claim 1 further comprising obtaining the information representative of the first virtual object from one or more of: the content in the delivered media stream, a data store of images, and a computer-generated representation of the first virtual object. 13. The computer-implemented method of claim 12 wherein the information representative of the first virtual object is an image or audio that is output on the user device. 14. The computer-implemented method of claim 1 wherein accessing the first context information comprises receiving the first context information from a media server that is delivering the delivered media stream to the receiving device. 15. The computer-implemented method of claim 1 wherein accessing the second context information comprises receiving the second context information from the user device. 16. A computer device comprising: a processing device; a non-transitory memory having stored thereon computer-executable program code; a display device; and an image capturing device, wherein, when the processing device executes the computer-executable program code, the processing device: controls the image capturing device to generate a first media stream; receives the first media stream from the image capturing device; provides information relating to the first media stream to a server system separate from the computer device, the information representative of at least one of: a physical location of the image capturing device, a physical object within a field of view of the image capturing device, and a result generated from a content analysis of the first media stream; receives from the server system augmented reality (AR) data representative of virtual objects identified based on content in the first media stream and on content in a second media stream different from the first media stream; generates images of the virtual objects using the AR data and using transformational information comprising at least one transformation to be performed on the virtual objects; and presents the images of the virtual objects in an augmented field of display based on the field of view of the image capturing device, by displaying the images of the virtual objects, as transformed using the transformational information, on the display device. 17. The computer device of claim 16 further comprising the processing device providing information about the second media stream to the server system. 18. (canceled) 19. The computer device of claim 16 wherein the display device comprises transparent eye pieces having disposed thereon active display elements for generating the images of the virtual objects. 20. (canceled) 21. (canceled) 22. The computer device of claim 16 wherein the information relating to the first media stream that is delivered to the server system comprises the first media stream. 23. The computer device of claim 16 wherein, when the processing device executes the computer-executable program code, the processing device further generates context information from the first media stream, and wherein the information relating to the first media stream that is delivered to the server system comprises the context information. 24. (canceled) 25. The computer-implemented method of claim 1 wherein the content analysis of the captured media stream comprises image processing. 26. 
The computer-implemented method of claim 1 wherein the result generated from the content analysis comprises one or more of object detection information and facial recognition information. 27. The computer-implemented method of claim 1 wherein the second context information further comprises data representative of a user interaction with the user device.
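Claim 1 splits the work between a server that selects a first virtual object from the delivered-media context and computes transformational information from the captured-media context, and a user device that renders the transformed object. A minimal sketch of the transformation step appears below; the bounding-box input and the translate-plus-scale transform are assumptions for illustration, as the claims leave both the content analysis and the transformation open-ended.

```python
def determine_transformational_info(virtual_obj_size, detected_box):
    """Anchor a virtual object onto a physical object detected in the
    captured media stream (hypothetical model: a 2D translation plus a
    uniform scale).

    detected_box: (x, y, width, height) of the detected physical object
    in the captured frame, e.g., from object detection (claim 26).
    """
    x, y, w, h = detected_box
    scale = min(w, h) / max(virtual_obj_size, 1e-6)  # fit inside the box
    return {
        "translate": (x + w / 2.0, y + h / 2.0),  # center on the object
        "scale": scale,
    }
```

The server would send this transformational information, together with the information representative of the first virtual object, to the user device, which applies the transform before compositing the object into its augmented field of view.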
2,600
10,864
10,864
14,609,103
2,626
A system and method for leveraging computing resources to convey or otherwise illustrate information. An example method includes receiving a signal from a user input mechanism of a first device, the first device characterized by the user input mechanism in communication with a first display; displaying a first layout on the first display in response to the signal; and generating instructions for a second user interface layout for presentation on a second display that is larger than the first display, wherein content of the second layout is coordinated with content of the first layout, both layouts being associated with a software application, and wherein the second layout includes one or more additional visual features relative to the first layout.
1. A method for coordinating the display of an application across multiple display screens in a computing environment, the computing environment including one or more computing devices in communication with a software application, wherein the software provides displayable information accessible to the one or more computing devices, a computing device of the one or more computing devices executing the steps of the method, the method comprising: receiving a signal from a user input mechanism of a first device, the first device characterized by the user input mechanism in communication with a first display; displaying a first user interface display screen on the first display in response to the signal; and generating instructions for a second user interface display screen for presentation on a second display that is larger than the first display, wherein the second user interface display screen is coordinated with the first user interface display screen, wherein the first user interface display screen and the second user interface display screen are associated with a software application, and wherein the second user interface display screen includes one or more additional features relative to the first user interface display screen. 2. The method of claim 1, further including: employing a client device to determine a difference between a first computing resource and a second computing resource; obtaining a first set of information and a second set of information based on the difference, wherein the second set of information is augmented relative to the first set of information; generating a first set of computer instructions adapted to enable a first computing resource to convey the first set of information; providing a second set of computer instructions adapted to enable a second computing resource to convey a second set of information; coordinating delivery of the first set of information and the second set of information to the first computing resource and the second computing resource, respectively. 3. The method of claim 2, wherein the first computing resource includes the first display, and the second computing resource includes the second display. 4. The method of claim 3, wherein the difference represents a difference in viewable display areas between the first display and the second display. 5. The method of claim 3, wherein the first set of information and the second set of information include overlapping visual information. 6. The method of claim 5, wherein the client device includes an appliance, and wherein the appliance is in communication with the first computing resource and the second computing resource. 7. The method of claim 6, wherein the appliance includes a set-top box, and wherein the first computing resource includes a mobile device display. 8. The method of claim 5, wherein the client device includes a mobile device that is adapted to transmit rendering instructions to the second display, wherein the rendering instructions are adapted to leverage additional display area of the second display relative to the display area of the first display. 9. The method of claim 2, wherein the first computer instructions and the second computer instructions include rendering instructions derived from output of a single software application. 10. 
The method of claim 2, wherein the first computing resource includes the first display, and wherein the second computing resource includes the second display that is larger than the first display, and wherein the first set of information includes visual information adapted for presentation of a first user interface display screen on the first display, and wherein the second set of information includes visual information adapted for presentation of a second user interface display screen on the second display. 11. The method of claim 10, wherein the second user interface display screen represents a reformatted version of the first user interface display screen. 12. The method of claim 10, wherein the second user interface display screen includes information present in the first user interface display screen that is integrated with additional information not present in the first user interface display screen. 13. The method of claim 10, wherein the second user interface display screen includes one or more additional user interface controls relative to the first user interface display screen. 14. The method of claim 2, wherein the first computing resource includes a first speaker system, and wherein the second computing resource includes a second speaker system. 15. The method of claim 14, wherein the first set of information includes information contained in a first audio signal tailored for use by the first computing resource, and wherein the second set of information includes information contained in a second audio signal tailored for use by the second computing resource. 16. The method of claim 15, wherein the second speaker system includes additional functionality relative to the first speaker system, and wherein the additional functionality is adapted to use additional information contained in the second audio signal relative to the first audio signal. 17. The method of claim 2, wherein the first computing resource includes a mobile device display showing the first user interface display screen, and wherein the second computing resource includes a projector adapted to project the second user interface display screen. 18. The method of claim 17, wherein the first user interface display screen is characterized by a first layout, and wherein the second user interface display screen is characterized by a second layout that includes one or more user interface controls not present in the first user interface display screen. 19. 
An apparatus for coordinating the display of an application across multiple display screens in a computing environment, the computing environment including one or more computing devices in communication with a software application, wherein the software application executes software, wherein the software provides displayable information accessible to the one or more computing devices, the one or more computing devices configured to perform the following acts: receiving a signal from a user input mechanism of a first device, the first device characterized by the user input mechanism in communication with a first display; displaying a first user interface display screen on the first display in response to the signal; and generating instructions for a second user interface display screen for presentation on a second display that is larger than the first display, wherein the second user interface display screen is coordinated with the first user interface display screen, wherein the first user interface display screen and the second user interface display screen are associated with an application, and wherein the second user interface display screen includes one or more additional visual features or functionality relative to the first user interface display screen. 20. A tangible storage medium including instructions executable by one or more servers of a server system for coordinating the display of an application across multiple display screens in a computing environment, the computing environment including one or more computing devices in communication with a software application, wherein the software application executes software, wherein the software provides displayable information accessible to the one or more computing devices, the tangible storage medium including instructions for: receiving a signal from a user input mechanism of a first device, the first device characterized by the user input mechanism in communication with a first display; displaying a first user interface display screen on the first display in response to the signal; and generating instructions for a second user interface display screen for presentation on a second display that is larger than the first display, wherein the second user interface display screen is coordinated with the first user interface display screen, wherein the first user interface display screen and the second user interface display screen are associated with an application, and wherein the second user interface display screen includes one or more additional visual features or functionality relative to the first user interface display screen.
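The claims describe one application driving two coordinated user interface screens, where the screen on the larger display keeps the first screen's content and gains additional controls (claims 1, 10, and 13). A minimal sketch follows; the Layout type and the example control names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Layout:
    """A hypothetical stand-in for a user interface display screen."""
    controls: list = field(default_factory=list)
    width: int = 0
    height: int = 0

def build_second_layout(first: Layout, second_w: int, second_h: int,
                        extra_controls=("timeline", "detail_panel")):
    """Produce a second-screen layout coordinated with the first: it
    reuses the first screen's controls and, when the second display is
    larger, adds controls not present on the first screen."""
    second = Layout(controls=list(first.controls),
                    width=second_w, height=second_h)
    if second_w * second_h > first.width * first.height:
        second.controls.extend(extra_controls)  # additional UI controls
    return second
```

An appliance such as a set-top box (claim 7), or the mobile device itself (claim 8), would then transmit rendering instructions for the second layout to the larger display.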
2,600
10,865
10,865
16,517,714
2,632
A method and apparatus for providing a warning about a suspect vehicle are provided. During operation, automatic-license-plate-reading (ALPR) circuitry scans a license plate and determines a current location of the owner of the vehicle. If the current location of the owner of the vehicle and the current location of the vehicle differ, a warning is provided to the user of the ALPR circuitry. In an alternate embodiment, if the current locations of all individuals who reside with the owner of the vehicle differ from the current location of the vehicle, a warning is provided to the user of the ALPR circuitry.
1. An apparatus comprising: a GPS receiver configured to determine a location of the apparatus; a camera configured to capture an image of a license plate; logic circuitry configured to receive a location of a smart device for an owner associated with the license plate and provide a warning if the location of the smart device differs from the location of the apparatus by a predetermined amount. 2. The apparatus of claim 1 further comprising a network interface configured to provide a license-plate number to a server and receive the location of the smart device from the server. 3. The apparatus of claim 1 further comprising a graphical-user interface (GUI) coupled to the logic circuitry, the GUI configured to output the warning. 4. The apparatus of claim 1 wherein the warning comprises an audible warning. 5. The apparatus of claim 1 wherein the warning comprises a visible warning. 6. An apparatus comprising: a GPS receiver configured to determine a location of the apparatus; a camera configured to capture an image of a license-plate number; a network interface configured to provide the license-plate number to a server and receive a location of a smart device for an owner associated with the license-plate number; logic circuitry configured to receive a location of a smart device for an owner associated with the license plate and output a warning if the location of the smart device differs from the location of the apparatus by a predetermined amount; a graphical-user interface (GUI) coupled to the logic circuitry, the GUI configured to output the warning, wherein the warning comprises an audible and/or a visible warning. 7. A method comprising the steps of: determining a location of a first vehicle, wherein the first vehicle comprises an automatic-license-plate reader (ALPR); capturing an image of a license plate with the ALPR, the license plate attached to a second vehicle; receiving a location of a smart device of an owner of the second vehicle; providing a warning to a user if the location of the smart device differs from the location of the first vehicle by a predetermined amount. 8. The method of claim 7 further comprising the step of providing a license-plate number to a server, wherein the step of receiving the location of the smart device comprises the step of receiving the location from the server. 9. The method of claim 7 wherein the step of providing the warning comprises the step of providing an audible and/or a visible warning. 10. The method of claim 7 wherein the location of the ALPR serves as a proxy for the location of the first vehicle.
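The core check in claim 1 reduces to a distance comparison between the apparatus's GPS fix and the reported location of the owner's smart device. A minimal sketch using the haversine great-circle distance is shown below; the 500 m default stands in for the claim's unspecified "predetermined amount".

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) fixes."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def check_plate(apparatus_fix, owner_fix, threshold_m=500.0):
    """Return a warning string when the owner's smart device is farther
    than the threshold from the apparatus; None otherwise."""
    d = haversine_m(*apparatus_fix, *owner_fix)
    if d > threshold_m:
        return f"WARNING: registered owner is {d / 1000:.1f} km from vehicle"
    return None
```

For example, check_plate((42.36, -71.06), (41.88, -87.63)) returns a warning, since the owner's device is over a thousand kilometers from the scanned vehicle.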
2,600
10,866
10,866
15,919,898
2,611
A method and system for safety enhancement are provided. A movement of a user of a mobile display system that displays augmented reality information is measured. Movement information about the user is relayed from the movement measured for the user. A speed at which the user is moving with respect to a structure is determined using the movement information and a three-dimensional model of the structure. A visual display of the augmented reality information on the mobile display system is deactivated when the speed at which the user is moving with respect to the structure meets a deactivation condition.
1. A safety enhancement system comprising: a sensor system configured to measure a movement of a user of a mobile display system that displays augmented reality information and relay movement information about the user; a three-dimensional model of a structure; and a safety controller in communication with the sensor system, wherein the safety controller is configured to receive the movement information from the sensor system; determine a speed at which the user is moving with respect to the structure using the movement information and the three-dimensional model of the structure; and deactivate a visual display of the augmented reality information on the mobile display system when the speed at which the user is moving with respect to the structure meets a deactivation condition. 2. The safety enhancement system of claim 1, wherein the safety controller determines a location of the user with respect to the structure; and deactivates the visual display of the augmented reality information on the mobile display system when the speed at which the user is moving and the location of the user with respect to the structure meets the deactivation condition. 3. The safety enhancement system of claim 1, wherein in deactivating the visual display of the augmented reality information on the mobile display system when the speed at which the user is moving with respect to the structure meets the deactivation condition, the safety controller causes a blank display on the mobile display system when the speed at which the user is moving with respect to the structure meets the deactivation condition. 4. The safety enhancement system of claim 1, wherein in deactivating the visual display of the augmented reality information on the mobile display system when the speed at which the user is moving with respect to the structure meets the deactivation condition, the safety controller removes the visual display of the augmented reality information while continuing to display a live view when the speed at which the user is moving with respect to the structure meets the deactivation condition. 5. The safety enhancement system of claim 4, wherein the safety controller is configured to resume displaying the visual display of the augmented reality information when the speed at which the user is moving with respect to the structure no longer meets the deactivation condition. 6. The safety enhancement system of claim 1, wherein the sensor system is configured to measure a position of the user and generate position information, wherein the safety controller is configured to determine whether the user is in an undesired posture using the position information; determine whether the user has been in the undesired posture for a period of time that is greater than a posture threshold for the undesired posture; and generate a warning. 7. The safety enhancement system of claim 6, wherein the safety controller is configured to turn off the mobile display system if the user does not move out of the undesired posture after a selected period of time. 8. 
The safety enhancement system of claim 1, wherein the sensor system is configured to measure a position of the user and generate position information from the position measured for the user, and wherein the safety controller is configured to determine the position of the user with respect to the structure using the position information and the three-dimensional model; identify a number of hazardous locations for the structure using the three-dimensional model; and present an alert for a hazardous location in the number of hazardous locations when the user is within an undesired distance from the hazardous location using the position of the user with respect to the structure and the three-dimensional model. 9. The safety enhancement system of claim 1, wherein the sensor system is selected from at least one of an accelerometer, a gyroscope, a magnetometer, a global positioning system device, or a camera. 10. The safety enhancement system of claim 1, wherein the mobile display system is selected from a group comprising a head-mounted display, smart glasses, a mobile phone, and a tablet computer. 11. The safety enhancement system of claim 1, wherein the structure is selected from one of a mobile platform, a stationary platform, a land-based structure, an aquatic-based structure, a space-based structure, an aircraft, a surface ship, a tank, a personnel carrier, a train, a spacecraft, a space station, a satellite, a submarine, an automobile, a power plant, a bridge, a dam, a house, a manufacturing facility, a manufacturing cell, an aircraft structure, a fuselage section, a wing, a wing box, an engine housing, and an aircraft in an uncompleted state. 12. A method for safety enhancement comprising: receiving, by a safety controller, movement information for a user of a mobile display system that displays augmented reality information; determining, by the safety controller, a speed at which the user is moving with respect to a structure using the movement information and a three-dimensional model of the structure; and deactivating, by the safety controller, a visual display of the augmented reality information on the mobile display system when the speed at which the user is moving with respect to the structure meets a deactivation condition. 13. The method of claim 12 further comprising: measuring, by a sensor system, movement of the user of the mobile display system that displays the augmented reality information; and relaying, by the sensor system to the safety controller, the movement information about the user from measuring the movement for the user. 14. The method of claim 12 further comprising: determining, by the safety controller, a location of the user with respect to the structure, wherein deactivating the visual display of the augmented reality information on the mobile display system comprises: deactivating, by the safety controller, the visual display of the augmented reality information on the mobile display system when the speed at which the user is moving and a location of the user with respect to the structure meets the deactivation condition. 15. 
The method of claim 12, wherein deactivating, by the safety controller, the visual display of the augmented reality information on the mobile display system when the speed at which the user is moving with respect to the structure meets the deactivation condition comprises: causing, by the safety controller, a blank display on the mobile display system when the speed at which the user is moving with respect to the structure meets the deactivation condition. 16. The method of claim 12, wherein deactivating, by the safety controller, the visual display of the augmented reality information on the mobile display system when the speed at which the user is moving with respect to the structure meets the deactivation condition comprises: removing, by the safety controller, the visual display of the augmented reality information while continuing to display a live view when the speed at which the user is moving with respect to the structure meets the deactivation condition. 17. The method of claim 16 further comprising: resuming, by the safety controller, displaying of the visual display of the augmented reality information when the speed at which the user is moving with respect to the structure no longer meets the deactivation condition. 18. The method of claim 12 further comprising: measuring, by a sensor system, a position of the user; relaying, by the sensor system, position information from the position measured for the user to the safety controller; determining, by the safety controller, whether the user is in an undesired posture using the position information; determining, by the safety controller, whether the user has been in the undesired posture for a period of time that is greater than a posture threshold for the undesired posture; and generating a warning. 19. The method of claim 18 further comprising: turning off, by the safety controller, the mobile display system if the user does not move out of the undesired posture after a selected period of time. 20. The method of claim 12 further comprising: measuring, by a sensor system, a position of the user; relaying, by the sensor system, position information from measuring the position of the user to the safety controller; determining, by the safety controller, the position of the user with respect to the structure using the position information and the three-dimensional model; identifying, by the safety controller, a number of hazardous locations for the structure using the three-dimensional model; and generating, by the safety controller, an alert for a hazardous location in the number of hazardous locations when the user is within an undesired distance from the hazardous location using the position of the user with respect to the structure and the three-dimensional model. 21. The safety enhancement system of claim 1, wherein the deactivation condition requires the speed at which the user is moving with respect to the structure to exceed a speed threshold. 22. The method of claim 12, wherein the deactivation condition requires the speed at which the user is moving with respect to the structure to exceed a speed threshold.
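Claims 1 and 12 reduce to computing the user's speed relative to the structure from successive positions expressed in the model's coordinate frame and gating the augmented reality overlay on a threshold (claims 21 and 22). The sketch below is illustrative only; the display object with hide_augmentations and show_augmentations methods, and the 1.5 m/s threshold, are assumptions not taken from the claims.

```python
def relative_speed(p0, p1, dt):
    """Speed of the user relative to the structure, from two (x, y, z)
    positions in the three-dimensional model's frame, dt seconds apart."""
    if dt <= 0:
        raise ValueError("dt must be positive")
    dx, dy, dz = (b - a for a, b in zip(p0, p1))
    return (dx * dx + dy * dy + dz * dz) ** 0.5 / dt

def update_overlay(display, p0, p1, dt, speed_threshold_mps=1.5):
    """Hide the AR overlay while the user moves faster than the
    threshold, keeping the live view visible (claim 4), and resume the
    overlay once the speed drops back below it (claim 5)."""
    if relative_speed(p0, p1, dt) > speed_threshold_mps:
        display.hide_augmentations()   # live view remains on screen
    else:
        display.show_augmentations()
```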
2,600
10,867
10,867
15,117,702
2,641
There is provided a method of operating a network node in a first network that is operating according to a first radio access technology, RAT, the network node controlling a first cell in the first network, the method comprising receiving information for a terminal device served by the first cell (913; 1113), the received information corresponding to information for the terminal device that was provided to the terminal device from another cell of the first network, the information being for use in a network interworking feature that enables and controls interworking between the first network and a network operating according to a second RAT.
1. A method of operating a network node in a first network that is operating according to a first radio access technology, RAT, the network node controlling a first cell in the first network, the method comprising: receiving information for a terminal device served by the first cell, the received information corresponding to information for the terminal device that was provided to the terminal device from another cell of the first network, the information being for use in a network interworking feature that enables and controls interworking between the first network and a network operating according to a second RAT. 2. The method as defined in claim 1, wherein the step of receiving comprises receiving the information from a network node that controls said another cell of the first network. 3. The method as defined in claim 2, wherein the step of receiving comprises receiving the information over an interface between the network node that is controlling the first cell and the network node that is controlling said another cell. 4. The method as defined in claim 3, wherein the interface is an X2 interface. 5.-22. (canceled) 23. A network node for use in a first network that is operating according to a first radio access technology, RAT, the network node supporting a network interworking feature that enables and controls interworking between the first network and a network operating according to a second RAT, the network node controlling a first cell in the first network, the network node comprising: a processing circuit and interface circuitry that are configured to: receive information for a terminal device served by the first cell, the received information corresponding to information for the terminal device that was provided to the terminal device from another cell of the first network, the information being for use in the network interworking feature. 24. The network node as defined in claim 23, wherein the processing circuit and interface circuitry are configured to receive the information from a network node that controls said another cell of the first network. 25. The network node as defined in claim 24, wherein the processing circuit and interface circuitry are configured to receive the information over an interface between the network node that is controlling the first cell and the network node that is controlling said another cell. 26. The network node as defined in claim 25, wherein the interface is an X2 interface. 27.-44. (canceled) 45. A method of operating a network node in a first network that is operating according to a first radio access technology, RAT, the network node controlling a second cell in the first network, the method comprising: sending information for a terminal device that was used in the second cell to a network node controlling a first cell, the information being for use in a network interworking feature that enables and controls interworking between the first network and a network operating according to a second RAT. 46. The method as defined in claim 45, wherein the step of sending comprises sending the information over an interface between the network node that is controlling the first cell and the network node that is controlling the second cell. 47. The method as defined in claim 45, wherein the step of sending comprises sending the information from the network node that is controlling the second cell to the network node that is controlling the first cell via another network node in the first network. 48.
The method as defined in claim 45, the method further comprising the step of: receiving a request for the information for the terminal device from the network node that is controlling the first cell. 49. The method as defined in claim 48, wherein the step of receiving a request comprises receiving the request for the information over an interface between the network node that is controlling the first cell and the network node that is controlling the second cell. 50.-52. (canceled) 53. A network node for use in a first network that is operating according to a first radio access technology, RAT, the network node supporting a network interworking feature that enables and controls interworking between the first network and a network operating according to a second RAT, the network node controlling a second cell in the first network, the network node comprising: a processing circuit and interface circuitry that are configured to: send information for a terminal device that was used in the second cell to a network node controlling a first cell, the information being for use in the network interworking feature. 54. The network node as defined in claim 53, wherein the processing circuit and interface circuitry are configured to send the information over an interface between the network node that is controlling the first cell and the network node that is controlling the second cell. 55. The network node as defined in claim 53, wherein the processing circuit and interface circuitry are configured to send the information from the network node that is controlling the second cell to the network node that is controlling the first cell via another network node in the first network. 56. The network node as defined in claim 53, wherein the processing circuit and interface circuitry are further configured to receive a request for the information for the terminal device from the network node that is controlling the first cell. 57.-60. (canceled) 61. A method of operating a terminal device in a first network that is operating according to a first radio access technology, RAT, the terminal device supporting and operating according to a network interworking feature that enables and controls interworking between the first network and a network operating according to a second RAT, the method comprising: sending information for the terminal device to a network node controlling a first cell in the first network, the information having been previously used by the terminal device in another cell in the first network, the information being for use in the network interworking feature. 62. The method as defined in claim 61, the method further comprising the step of: receiving a request for the information for the terminal device from the network node that is controlling the first cell. 63. The method as defined in claim 61, the method further comprising the step of: receiving, from the network node, an indication of the suitability of the information sent to the network node. 64. The method as defined in claim 63, the method further comprising the step of: discarding the information and/or ceasing to act according to the information if the received indication indicates that the information is unsuitable. 65. The method as defined in claim 63, wherein the step of receiving an indication of the suitability of the information comprises receiving an explicit indication from the network node indicating the suitability of the information. 66.-73. (canceled) 74.
A terminal device for use in a first network that is operating according to a first radio access technology, RAT, the terminal device supporting and operating according to a network interworking feature that enables and controls interworking between the first network and a network operating according to a second RAT, the terminal device comprising: a processing circuit and transceiver circuitry that are configured to: send information for the terminal device to a network node controlling a first cell in the first network, the information having been previously used by the terminal device in another cell in the first network, the information being for use in the network interworking feature. 75. The terminal device as defined in claim 74, wherein the processing circuit and transceiver circuitry are further configured to receive a request for the information for the terminal device from the network node that is controlling the first cell. 76. The terminal device as defined in claim 74, wherein the processing circuit and transceiver circuitry are configured to send the information during or following hand-in of the terminal device to the first cell from another cell in the first network. 77. The terminal device as defined in claim 74, wherein the processing circuit and transceiver circuitry are configured to send the information following a transition of the terminal device from an idle mode to a connected mode. 78. The terminal device as defined in claim 74, wherein the information is traffic steering information and/or access network selection information. 79. (canceled)
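The claim sets above describe one exchange from three vantage points: the target node receives per-UE interworking parameters (claims 1-4), the source node sends them (claims 45-49), and the UE itself can supply them (claims 61-65). A schematic sketch follows; the in-memory method call stands in for the X2 interface, and every name here (InterworkingInfo, CellNode, traffic_steering) is a hypothetical illustration, not a 3GPP-defined API.

```python
from dataclasses import dataclass, field

@dataclass
class InterworkingInfo:
    """Per-terminal parameters for the interworking feature, e.g.
    traffic steering or access network selection settings (claim 78)."""
    ue_id: str
    traffic_steering: dict = field(default_factory=dict)

class CellNode:
    """Toy node controlling one cell; a direct method call stands in
    for the inter-node (e.g. X2) interface of claims 3 and 46."""

    def __init__(self, cell_id):
        self.cell_id = cell_id
        self._per_ue = {}  # ue_id -> InterworkingInfo

    def store(self, info):
        self._per_ue[info.ue_id] = info

    def fetch_from(self, source, ue_id):
        # Claims 45/48: on request, the node controlling the source cell
        # sends the parameters the terminal was using there.
        info = source._per_ue[ue_id]
        self.store(info)
        return info

# Hand-in of ue-1 from cell A to cell B reuses the earlier parameters.
cell_a, cell_b = CellNode("A"), CellNode("B")
cell_a.store(InterworkingInfo("ue-1", {"wlan_rssi_threshold": -75}))
cell_b.fetch_from(cell_a, "ue-1")
```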
2,600
10,868
10,868
15,406,927
2,626
A display device includes a substrate including a first pixel area and a second pixel area, wherein the second pixel area is located at a side of the first pixel area, first pixels located in the first pixel area and connected to first scan lines, and second pixels located in the second pixel area and connected to second scan lines, wherein the first pixels and the second pixels include pixel rows extending in a first direction, and at least one of the second scan lines is inclined with respect to the first direction.
1. A display device, comprising: a substrate including a first pixel area and a second pixel area, wherein the second pixel area is located at a side of the first pixel area; first pixels located in the first pixel area and connected to first scan lines; and second pixels located in the second pixel area and connected to second scan lines, wherein the first pixels and the second pixels include pixel rows extending in a first direction, and at least one of the second scan lines is inclined with respect to the first direction. 2. The display device of claim 1, wherein the substrate further comprises: a first peripheral area located outside the first pixel area; a second peripheral area located outside the second pixel area; first scan stages located in the first peripheral area and connected to the first scan lines; and second scan stages located in the second peripheral area and connected to the second scan lines. 3. The display device of claim 2, wherein the first scan lines extend from output terminals of the first scan stages in parallel with the first direction. 4. The display device of claim 2, wherein the second peripheral area has a curved shape. 5. The display device of claim 1, wherein a number of pixels provided in pixel rows arranged in the second pixel area is smaller than a number of pixels provided in pixel rows arranged in the first pixel area. 6. The display device of claim 5, wherein, among the pixel rows arranged in the second pixel area, pixel rows farther from the first pixel area include fewer pixels. 7. The display device of claim 1, wherein the second pixel area has a smaller area than the first pixel area and a corner portion of the second pixel area has a curved shape. 8. The display device of claim 2, wherein a position of a second scan stage output terminal along a second direction crossing the first direction is different from a position of a scan signal input terminal of each of the second pixels connected to the second scan stage through a corresponding second scan line. 9. A display device, comprising: a substrate including a first pixel area and a second pixel area, wherein the second pixel area is located at a side of the first pixel area; first pixels located in the first pixel area and connected to first scan lines; and second pixels located in the second pixel area and connected to second scan lines, wherein first pixel rows in which the first pixels are arranged extend in a first direction and are located in the first pixel area and second pixel rows in which the second pixels are arranged extend in the first direction and are located in the second pixel area, and lengths of second scan lines connected to outermost second pixels in the second pixel rows are greater than lengths of first scan lines connected to outermost first pixels in the first pixel rows. 10. The display device of claim 9, wherein the lengths of the second scan lines connected to the outermost second pixels in the second pixel rows increase as the second pixel rows get farther from the first pixel area. 11. The display device of claim 10, wherein the second pixel area has a smaller area than the first pixel area and has a corner portion with a curved shape. 12.
The display device of claim 9, wherein pixel columns in which the first pixels and the second pixels are arranged in a second direction crossing the first direction are located in the first pixel area and the second pixel area, and data lines extending in the second direction are connected to each of the pixel columns. 13. The display device of claim 12, wherein the substrate further comprises: a first peripheral area located outside the first pixel area; a second peripheral area located outside the second pixel area; first scan stages located in the first peripheral area and connected to the first scan lines; and second scan stages located in the second peripheral area and connected to the second scan lines. 14. The display device of claim 13, wherein the second peripheral area has a curved shape. 15. The display device of claim 13, wherein overlapping areas between the second scan lines and the data lines are formed in a region between the second scan stages and the second pixels. 16. The display device of claim 15, wherein a plurality of first overlapping areas are formed between a first second scan line of the second scan lines and the data lines, a plurality of second overlapping areas are formed between a second second scan line of the second scan lines and the data lines, and the first second scan line is closer to the first pixel area than the second second scan line. 17. The display device of claim 16, wherein a sum of areas of the second overlapping areas is greater than a sum of areas of the first overlapping areas. 18. The display device of claim 16, wherein a number of the first overlapping areas is smaller than a number of the second overlapping areas. 19. A display device, comprising: a substrate including a first pixel area and a second pixel area, wherein the second pixel area is adjacent to the first pixel area, wherein a corner of the second pixel area has a curved shape; the substrate further including a first peripheral area adjacent to the second pixel area and having a curved shape, wherein the first peripheral area includes a plurality of scan drivers, and the second pixel area includes a plurality of pixels arranged in pixel rows, wherein a distance from a scan driver adjacent to the first pixel area to a corresponding pixel is less than a distance from a scan driver farther away from the first pixel area to a corresponding pixel. 20. The display device of claim 19, wherein an amount of overlap between a signal line and a data line in the second pixel area is greater proceeding away from the first pixel area.
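Claims 5-6 and 9-10 are essentially geometric: in a rounded corner, rows farther from the main pixel area hold fewer pixels, and the scan line reaching the outermost pixel must be routed farther, so it grows longer and crosses more data lines. A rough sketch under stated assumptions (quarter-circle corner, uniform pixel pitch; both parameters are illustrative, not from the patent):

```python
import math

def corner_rows(radius, pitch=1.0):
    """For a quarter-circle corner of the second pixel area, return
    (row, pixels_in_row, extra_route_length) per row: rows farther from
    the first pixel area lose pixels (claims 5-6) while the routing to
    their outermost pixel lengthens (claims 9-10)."""
    rows = []
    for i in range(int(radius / pitch)):
        y = (i + 0.5) * pitch                       # depth into the corner
        inset = radius - math.sqrt(radius**2 - y**2)
        pixels = max(0, int((radius - inset) / pitch))
        rows.append((i, pixels, round(inset, 2)))   # inset ~ extra length
    return rows

print(corner_rows(5))  # pixel count falls and extra length grows per row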
2,600
10,869
10,869
15,573,866
2,657
In accordance with an example embodiment of the present invention, disclosed are a method and an apparatus for assisting a selection of an encoding mode for a multi-channel audio signal encoding where different encoding modes may be chosen for the different channels. The method is performed in an audio encoder and comprises obtaining a plurality of audio signal channels and coordinating or synchronizing the selection of an encoding mode for a plurality of the obtained channels, wherein the coordination is based on an encoding mode selected for one of the obtained channels or for a group of the obtained channels.
1. A method for assisting a selection of an encoding mode for a multi-channel audio signal encoding where different encoding modes may be chosen for the different channels, the method being performed in an audio encoder and comprising: obtaining a plurality of audio signal channels; and coordinating or synchronizing the selection of an encoding mode for a plurality of the obtained channels, wherein the coordination is based on an encoding mode selected for one of the obtained channels or for a group of the obtained channels. 2. The method of claim 1, further comprising applying a coding mode selected for one of the obtained channels for encoding a plurality of the obtained channels. 3. The method of claim 1, further comprising applying a coding mode selected for a combination of at least two of the obtained channels for encoding a plurality of the obtained channels. 4. The method of claim 1, further comprising determining whether coordination of the selection of encoding mode is required, and performing the coordination when it is required. 5. The method of claim 1, further comprising determining which of the channels require coordination. 6. The method of claim 1, further comprising selecting a master codec instance, wherein the master codec instance imposes its mode decision on other codec instances. 7. The method of claim 1, further comprising encoding the audio signal channels in accordance with the coordinated encoding mode selection. 8. An apparatus for assisting a selection of an encoding mode for a multi-channel audio signal, the apparatus comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the apparatus to: obtain a plurality of audio signal channels; and coordinate or synchronize the selection of an encoding mode for a plurality of the obtained channels, wherein the coordination is based on an encoding mode selected for one of the obtained channels or for a group of the obtained channels. 9. The apparatus of claim 8, further comprising instructions that, when executed by the processor, cause the apparatus to apply a coding mode selected for one of the obtained channels for encoding a plurality of the obtained channels. 10. The apparatus of claim 8, further comprising instructions that, when executed by the processor, cause the apparatus to apply a coding mode selected for a combination of at least two of the obtained channels for encoding a plurality of the obtained channels. 11. The apparatus of claim 8, further comprising instructions that, when executed by the processor, cause the apparatus to determine whether coordination of the selection of encoding mode is required, and to perform the coordination when it is required. 12. The apparatus of claim 8, further comprising instructions that, when executed by the processor, cause the apparatus to determine which of the obtained audio channels require coordination. 13. The apparatus of claim 8, wherein the apparatus is an audio encoder or an audio codec. 14. The apparatus of claim 8, wherein the apparatus is comprised in a host device (2, 5). 15.
A computer program product comprising a non-transitory computer readable medium storing a computer program for assisting a selection of an encoding mode for audio, the computer program comprising computer program code which, when run on an apparatus, causes the apparatus to: obtain a plurality of audio signal channels; and coordinate or synchronize the selection of an encoding mode for a plurality of the obtained channels, wherein the coordination is based on an encoding mode selected for one of the obtained channels or for a group of the obtained channels. 16. (canceled)
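Claims 1, 4, and 6 together describe a small decision procedure: collect per-channel mode proposals, decide whether coordination is needed, and, if so, let a master codec instance impose its mode. A minimal sketch, with the mode names and the master-selection rule as assumptions of this example rather than anything mandated by the claims:

```python
def select_modes(proposed, master_index=0, force=False):
    """proposed: per-channel mode proposals, e.g. ['ACELP', 'TCX', 'ACELP'].
    Coordination (claim 4) applies only when the proposals disagree or is
    explicitly forced; the master instance (claim 6) then imposes its
    mode on every channel, yielding a synchronized selection (claim 1)."""
    if not force and len(set(proposed)) <= 1:
        return list(proposed)          # already consistent; nothing to do
    return [proposed[master_index]] * len(proposed)

print(select_modes(["ACELP", "TCX", "ACELP"]))  # -> ['ACELP', 'ACELP', 'ACELP']
```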
2,600
10,870
10,870
13,944,454
2,612
An aspect provides a method, including: providing an indication of display screen refresh timing derived from a display system; associating, using the indication of the display screen refresh timing, a set of input data derived from an input surface with a display screen refresh interval; and synchronizing, using one or more processors, the set of input data derived from the input surface with a refresh of a display screen. Other aspects are described and claimed.
1. A method, comprising: providing an indication of display screen refresh timing derived from a display system; associating, using the indication of the display screen refresh timing, a set of input data derived from an input surface with a display screen refresh interval; and synchronizing, using one or more processors, the set of input data derived from the input surface with a refresh of a display screen. 2. The method of claim 1, wherein said providing comprises communicating the indication of display screen refresh timing between an operating system and the input surface. 3. The method of claim 1, wherein said providing comprises providing a line communicating the indication of display screen refresh timing between the display system and the input surface. 4. The method of claim 1, wherein associating a set of input data derived from the input surface with a display screen refresh interval comprises utilizing one or more buffers to buffer a set of input data derived from the input surface. 5. The method of claim 4, wherein the one or more buffers buffer a set of input data derived from the input surface during a display screen refresh interval. 6. The method of claim 1, wherein the indication of the display screen refresh timing comprises an event associated with a vsync signal. 7. The method of claim 6, wherein the vsync signal is derived from a graphics sub-system. 8. The method of claim 1, wherein associating comprises utilizing timing information of buffered input data derived from the input surface to select the set of input data. 9. The method of claim 1, further comprising processing input data derived from the input surface during a display screen refresh interval. 10. The method of claim 9, wherein the processing of the input data derived from the input surface is scaled based on the display screen refresh interval. 11. An information handling device, comprising: an input surface; a display system; one or more processors; a memory device accessible to the one or more processors and storing code executable by the one or more processors to: provide an indication of display screen refresh timing derived from a display system; associate, using the indication of the display screen refresh timing, a set of input data derived from an input surface with a display screen refresh interval; and synchronize, using one or more processors, the set of input data derived from the input surface with a refresh of a display screen. 12. The information handling device of claim 11, wherein to provide comprises communicating the indication of display screen refresh timing between an operating system and the input surface. 13. The information handling device of claim 11, wherein to provide comprises providing a line communicating the indication of display screen refresh timing between the display system and the input surface. 14. The information handling device of claim 11, wherein to associate a set of input data derived from the input surface with a display screen refresh interval comprises utilizing one or more buffers to buffer a set of input data derived from the input surface. 15. The information handling device of claim 14, wherein the one or more buffers buffer a set of input data derived from the input surface during a display screen refresh interval. 16. The information handling device of claim 11, wherein the indication of the display screen refresh timing comprises an event associated with a vsync signal. 17.
The information handling device of claim 16, wherein the vsync signal is derived from a graphics sub-system. 18. The information handling device of claim 11, wherein to associate comprises utilizing timing information of buffered input data derived from the input surface to select the set of input data. 19. The information handling device of claim 11, wherein the code is further executable by the one or more processors to process input data derived from the input surface during a display screen refresh interval, wherein processing of the input data derived from the input surface is scaled based on the display screen refresh interval. 20. A program product, comprising: a storage device having computer readable program code stored therewith, the computer readable program code comprising: computer readable program code configured to provide an indication of display screen refresh timing derived from a display system; computer readable program code configured to associate, using the indication of the display screen refresh timing, a set of input data derived from an input surface with a display screen refresh interval; and computer readable program code configured to synchronize, using one or more processors, the set of input data derived from the input surface with a refresh of a display screen.
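The method claims amount to a producer-consumer pattern keyed on the refresh clock: input samples are buffered as they arrive and, on each vsync-style event, exactly the samples belonging to the elapsed interval are released to the renderer. A minimal sketch follows; the class and method names are invented for illustration and are not from the patent.

```python
from collections import deque

class InputSynchronizer:
    """Buffers timestamped input samples (claims 4-5) and, on each
    display refresh event (claim 6), selects the set of samples that
    belongs to the just-finished refresh interval (claims 1 and 8)."""

    def __init__(self):
        self._buffer = deque()  # (timestamp, sample) pairs, oldest first

    def on_input(self, timestamp, sample):
        self._buffer.append((timestamp, sample))

    def on_refresh(self, refresh_time):
        batch = []
        while self._buffer and self._buffer[0][0] <= refresh_time:
            batch.append(self._buffer.popleft())
        return batch  # hand this set to rendering, synchronized to vsync

sync = InputSynchronizer()
sync.on_input(0.003, "touch@A")
sync.on_input(0.012, "touch@B")
print(sync.on_refresh(0.0167))  # both samples fall in the first ~60 Hz frame
```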
2,600
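As a rough illustration of the input-to-refresh synchronization described in the record above, here is a minimal Python sketch. It assumes a fixed 60 Hz refresh, and the hooks on_input_sample, on_vsync, and render_frame are our names, not anything specified in the claims:

    import time
    from collections import deque

    REFRESH_HZ = 60.0
    REFRESH_INTERVAL = 1.0 / REFRESH_HZ  # display screen refresh interval, seconds

    input_buffer = deque()  # buffers (timestamp, x, y) samples from the input surface

    def on_input_sample(x, y):
        # Buffer each digitizer sample with its arrival time.
        input_buffer.append((time.monotonic(), x, y))

    def on_vsync(vsync_time):
        # Select the set of buffered input data whose timestamps fall within
        # the refresh interval that just ended, and hand it to the renderer.
        window_start = vsync_time - REFRESH_INTERVAL
        frame_samples = []
        while input_buffer and input_buffer[0][0] <= vsync_time:
            t, x, y = input_buffer.popleft()
            if t >= window_start:
                frame_samples.append((x, y))
        render_frame(frame_samples)

    def render_frame(samples):
        pass  # placeholder: draw the selected samples during the next refresh

The deque plays the role of the claimed buffer: samples that arrive during one refresh interval are selected, on the vsync event, as the set of input data synchronized with the next refresh.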
10,871
10,871
16,110,087
2,611
A vehicle display device includes a vehicle front side camera that acquires a front area image, an image analyzing unit that detects, from the front area image, a position of white lines and, in a case in which there is a preceding vehicle on a lane, a position of the preceding vehicle, and a controller that sets a drawing area on the lane on the basis of the position of the white lines, or of the position of the white lines and the position of the preceding vehicle, and draws an information image including route guidance information in the drawing area on the basis of the shape of the drawing area.
1. A vehicle display device comprising: an image display unit that projects a display image in front of a driver of a vehicle and causes the display image to be displayed superimposed on a real landscape in front of the vehicle; a front area image acquiring unit that captures the real landscape in front of the vehicle and acquires a front area image; a white line detecting unit that detects a position of a pair of colored lines sandwiching a lane extending to an area in front of the vehicle and a position of a preceding vehicle in a case in which there is a preceding vehicle on the lane from the front area image; a drawing area setting unit that sets a drawing area on the lane on the basis of the positions of the pair of colored lines or the positions of the pair of colored lines and the position of the preceding vehicle; and a drawing unit that draws an information image including route guidance information to be informed to the driver in the drawing area on the basis of a shape of the drawing area, wherein the drawing unit transforms the information image in accordance with a change in a shape of the drawing area in the display image. 2. The vehicle display device according to claim 1, wherein the drawing area setting unit sets the drawing area in an area extending on the lane from the area in front of the vehicle along the colored line without overlapping the preceding vehicle on the basis of coordinates indicating the position of the colored line in the front area image or coordinates indicating the position of the colored line in the front area image and coordinates indicating the position of the preceding vehicle. 3. The vehicle display device according to claim 1, further comprising: a driver image acquiring unit that captures the driver and acquires a driver image; and an eye point detecting unit that detects a position of an eye point of the driver from the driver image, wherein the drawing unit adjusts the position of the drawing area in the display image in accordance with the position of the eye point. 4. The vehicle display device according to claim 2, further comprising: a driver image acquiring unit that captures the driver and acquires a driver image; and an eye point detecting unit that detects a position of an eye point of the driver from the driver image, wherein the drawing unit adjusts the position of the drawing area in the display image in accordance with the position of the eye point. 5. 
A display control method of a vehicle display device including an image display unit that projects a display image in front of a driver of a vehicle and causes the display image to be displayed superimposed on a real landscape in front of the vehicle, the display control method comprising: a front area image acquisition step of capturing the real landscape in front of the vehicle and acquiring a front area image; a white line detection step of detecting a position of a pair of colored lines sandwiching a lane extending to an area in front of the vehicle and a position of a preceding vehicle in a case in which there is a preceding vehicle on the lane from the front area image; a drawing area setting step of setting a drawing area on the lane on the basis of the positions of the pair of colored lines or the positions of the pair of colored lines and the position of the preceding vehicle; and a drawing step of drawing an information image including route guidance information to be informed to the driver in the drawing area on the basis of a shape of the drawing area, wherein the drawing step includes transforming the information image in accordance with a change in a shape of the drawing area in the display image.
A vehicle display device includes a vehicle front side camera that acquires a front area image, an image analyzing unit that detects, from the front area image, a position of white lines and, in a case in which there is a preceding vehicle on a lane, a position of the preceding vehicle, and a controller that sets a drawing area on the lane on the basis of the position of the white lines, or of the position of the white lines and the position of the preceding vehicle, and draws an information image including route guidance information in the drawing area on the basis of the shape of the drawing area.1. A vehicle display device comprising: an image display unit that projects a display image in front of a driver of a vehicle and causes the display image to be displayed superimposed on a real landscape in front of the vehicle; a front area image acquiring unit that captures the real landscape in front of the vehicle and acquires a front area image; a white line detecting unit that detects a position of a pair of colored lines sandwiching a lane extending to an area in front of the vehicle and a position of a preceding vehicle in a case in which there is a preceding vehicle on the lane from the front area image; a drawing area setting unit that sets a drawing area on the lane on the basis of the positions of the pair of colored lines or the positions of the pair of colored lines and the position of the preceding vehicle; and a drawing unit that draws an information image including route guidance information to be informed to the driver in the drawing area on the basis of a shape of the drawing area, wherein the drawing unit transforms the information image in accordance with a change in a shape of the drawing area in the display image. 2. The vehicle display device according to claim 1, wherein the drawing area setting unit sets the drawing area in an area extending on the lane from the area in front of the vehicle along the colored line without overlapping the preceding vehicle on the basis of coordinates indicating the position of the colored line in the front area image or coordinates indicating the position of the colored line in the front area image and coordinates indicating the position of the preceding vehicle. 3. The vehicle display device according to claim 1, further comprising: a driver image acquiring unit that captures the driver and acquires a driver image; and an eye point detecting unit that detects a position of an eye point of the driver from the driver image, wherein the drawing unit adjusts the position of the drawing area in the display image in accordance with the position of the eye point. 4. The vehicle display device according to claim 2, further comprising: a driver image acquiring unit that captures the driver and acquires a driver image; and an eye point detecting unit that detects a position of an eye point of the driver from the driver image, wherein the drawing unit adjusts the position of the drawing area in the display image in accordance with the position of the eye point. 5. 
A display control method of a vehicle display device including an image display unit that projects a display image in front of a driver of a vehicle and causes the display image to be displayed superimposed on a real landscape in front of the vehicle, the display control method comprising: a front area image acquisition step of capturing the real landscape in front of the vehicle and acquiring a front area image; a white line detection step of detecting a position of a pair of colored lines sandwiching a lane extending to an area in front of the vehicle and a position of a preceding vehicle in a case in which there is a preceding vehicle on the lane from the front area image; a drawing area setting step of setting a drawing area on the lane on the basis of the positions of the pair of colored lines or the positions of the pair of colored lines and the position of the preceding vehicle; and a drawing step of drawing an information image including route guidance information to be informed to the driver in the drawing area on the basis of a shape of the drawing area, wherein the drawing step includes transforming the information image in accordance with a change in a shape of the drawing area in the display image.
2,600
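To make the drawing-area geometry in the record above concrete, here is a minimal Python sketch. Representing each colored line by a near and a far endpoint, and the preceding vehicle by an (x, y, w, h) bounding box, are our assumptions, not the filing's data model:

    def interp_x(near, far, y):
        # Linear interpolation of x along a lane line at image row y.
        (x0, y0), (x1, y1) = near, far
        t = (y - y0) / (y1 - y0)
        return x0 + t * (x1 - x0)

    def set_drawing_area(left_near, left_far, right_near, right_far, vehicle_box=None):
        # Trapezoidal drawing area on the lane between the pair of colored
        # lines; image rows grow downward, so the "far" end has the smaller y.
        far_left, far_right = left_far, right_far
        if vehicle_box is not None:
            x, y, w, h = vehicle_box
            bottom = y + h  # image row of the preceding vehicle's lower edge
            if bottom > left_far[1]:  # vehicle intrudes into the area
                # Pull the far edge back so the area stops short of the vehicle.
                far_left = (interp_x(left_near, left_far, bottom), bottom)
                far_right = (interp_x(right_near, right_far, bottom), bottom)
        return [left_near, far_left, far_right, right_near]

    # e.g. lane-line endpoints from the image analyzer plus a vehicle box:
    area = set_drawing_area((300, 700), (420, 450), (900, 700), (780, 450),
                            vehicle_box=(520, 380, 160, 120))
    print(area)  # far edge clipped to row 500, just below the vehicle

The drawing unit would then perspective-warp the route-guidance image into this trapezoid, re-running the computation each frame so the image transforms as the area's shape changes.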
10,872
10,872
15,837,263
2,697
Systems and methods are disclosed for image signal processing. For example, methods may include determining an orientation setpoint for an image sensor; based on a sequence of orientation estimates for the image sensor and the orientation setpoint, invoking a mechanical stabilization system to adjust an orientation of the image sensor toward the orientation setpoint; receiving an image from the image sensor; determining an orientation error between the orientation of the image sensor and the orientation setpoint during capture of the image; based on the orientation error, invoking an electronic image stabilization module to correct the image for a rotation corresponding to the orientation error to obtain a stabilized image; and storing, displaying, or transmitting an output image based on the stabilized image.
1. A system comprising: an image sensor configured to capture an image; one or more motion sensors configured to detect motion of the image sensor; a mechanical stabilization system, including gimbals and motors, configured to control an orientation of the image sensor; an electronic image stabilization module configured to correct images for rotations of the image sensor; and a processing apparatus that is configured to: determine a sequence of orientation estimates based on sensor data from the one or more motion sensors; determine an orientation setpoint for the image sensor; based on the sequence of orientation estimates and the orientation setpoint, invoke the mechanical stabilization system to adjust the orientation of the image sensor; receive the image from the image sensor; determine an orientation error between the orientation of the image sensor and the orientation setpoint during capture of the image; based on the orientation error, invoke the electronic image stabilization module to correct the image for a rotation corresponding to the orientation error to obtain a stabilized image; and store, display, or transmit an output image based on the stabilized image. 2. The system of claim 1, comprising a drone that is coupled to a housing of the image sensor by the gimbals of the mechanical stabilization system. 3. The system of claim 1, in which the processing apparatus is configured to: store a sequence of images captured after the image in a buffer; and determine the rotation corresponding to the orientation error based on orientation estimates from the sequence of orientation estimates corresponding to the sequence of images. 4. The system of claim 3, in which the processing apparatus is configured to: determine a trajectory based on the sequence of orientation estimates corresponding to the sequence of images; and determine the rotation corresponding to the orientation error based on the trajectory. 5. The system of claim 4, in which the processing apparatus is configured to: determine a sequence of orientation errors based on the sequence of orientation estimates corresponding to the sequence of images and the orientation setpoint; and apply a filter to the sequence of orientation errors to obtain the trajectory. 6. The system of claim 1, in which the image is captured with an electronic rolling shutter, the orientation error is a first orientation error associated with a first portion of the image, and the processing apparatus is configured to: determine a second orientation error between the orientation of the image sensor and the orientation setpoint during capture of a second portion of the image; and based on the second orientation error, invoke the electronic image stabilization module to correct the second portion of the image for a rotation corresponding to the second orientation error to obtain the stabilized image. 7. The system of claim 1, in which the one or more motion sensors include encoders configured to detect a position and an orientation of the image sensor relative to a movable platform. 8. The system of claim 1, in which the orientation setpoint includes a quaternion. 9. The system of claim 1, in which the electronic image stabilization module is implemented by software executed by the processing apparatus. 10. 
A method comprising: determining an orientation setpoint for an image sensor; based on a sequence of orientation estimates for the image sensor and the orientation setpoint, invoking a mechanical stabilization system to adjust an orientation of the image sensor toward the orientation setpoint; receiving an image from the image sensor; determining an orientation error between the orientation of the image sensor and the orientation setpoint during capture of the image; based on the orientation error, invoking an electronic image stabilization module to correct the image for a rotation corresponding to the orientation error to obtain a stabilized image; and storing, displaying, or transmitting an output image based on the stabilized image. 11. The method of claim 10, comprising: storing a sequence of images captured after the image in a buffer; and determining the rotation corresponding to the orientation error based on orientation estimates from the sequence of orientation estimates corresponding to the sequence of images. 12. The method of claim 11, comprising: determining a trajectory based on the sequence of orientation estimates corresponding to the sequence of images; and determining the rotation corresponding to the orientation error based on the trajectory. 13. The method of claim 12, comprising: determining a sequence of orientation errors based on the sequence of orientation estimates corresponding to the sequence of images and the orientation setpoint; and applying a filter to the sequence of orientation errors to obtain the trajectory. 14. The method of claim 10, in which the image is captured with an electronic rolling shutter, the orientation error is a first orientation error associated with a first portion of the image, and further comprising: determining a second orientation error between the orientation of the image sensor and the orientation setpoint during capture of a second portion of the image; and based on the second orientation error, invoking the electronic image stabilization module to correct the second portion of the image for a rotation corresponding to the second orientation error to obtain the stabilized image. 15. The method of claim 10, in which an orientation in the sequence of orientation estimates includes an estimate of orientation of the image sensor with respect to a movable platform and an estimate of orientation of the image sensor with respect to gravity. 16. The method of claim 10, in which the mechanical stabilization system includes gimbals and motors controlled by proportional integral derivative controllers. 17. A system comprising: an image sensor configured to capture an image; a mechanical stabilization system, including motors, configured to control an orientation of the image sensor to match an orientation setpoint; and an electronic image stabilization module configured to correct the image for a rotation of the image sensor corresponding to orientation errors between the orientation of the image sensor and the orientation setpoint during capture of the image. 18. The system of claim 17, comprising a drone that is coupled to a housing of the image sensor by the mechanical stabilization system. 19. 
The system of claim 17, comprising: a buffer configured to store a sequence of images captured by the image sensor; and a fast trajectory generator configured to determine a sequence of rotations based on orientation errors corresponding to images in the buffer, wherein the electronic image stabilization module is configured to use a rotation from the sequence of rotations corresponding to an oldest image stored in the buffer. 20. The system of claim 17, comprising: a motion tracking module including one or more motion sensors configured to detect motion of the image sensor and determine a sequence of orientation estimates for the image sensor, wherein the mechanical stabilization system is configured to take the sequence of orientation estimates as input and use the sequence of orientation estimates as feedback for controlling the orientation of the image sensor.
Systems and methods are disclosed for image signal processing. For example, methods may include determining an orientation setpoint for an image sensor; based on a sequence of orientation estimates for the image sensor and the orientation setpoint, invoking a mechanical stabilization system to adjust an orientation of the image sensor toward the orientation setpoint; receiving an image from the image sensor; determining an orientation error between the orientation of the image sensor and the orientation setpoint during capture of the image; based on the orientation error, invoking an electronic image stabilization module to correct the image for a rotation corresponding to the orientation error to obtain a stabilized image; and storing, displaying, or transmitting an output image based on the stabilized image.1. A system comprising: an image sensor configured to capture an image; one or more motion sensors configured to detect motion of the image sensor; a mechanical stabilization system, including gimbals and motors, configured to control an orientation of the image sensor; an electronic image stabilization module configured to correct images for rotations of the image sensor; and a processing apparatus that is configured to: determine a sequence of orientation estimates based on sensor data from the one or more motion sensors; determine an orientation setpoint for the image sensor; based on the sequence of orientation estimates and the orientation setpoint, invoke the mechanical stabilization system to adjust the orientation of the image sensor; receive the image from the image sensor; determine an orientation error between the orientation of the image sensor and the orientation setpoint during capture of the image; based on the orientation error, invoke the electronic image stabilization module to correct the image for a rotation corresponding to the orientation error to obtain a stabilized image; and store, display, or transmit an output image based on the stabilized image. 2. The system of claim 1, comprising a drone that is coupled to a housing of the image sensor by the gimbals of the mechanical stabilization system. 3. The system of claim 1, in which the processing apparatus is configured to: store a sequence of images captured after the image in a buffer; and determine the rotation corresponding to the orientation error based on orientation estimates from the sequence of orientation estimates corresponding to the sequence of images. 4. The system of claim 3, in which the processing apparatus is configured to: determine a trajectory based on the sequence of orientation estimates corresponding to the sequence of images; and determine the rotation corresponding to the orientation error based on the trajectory. 5. The system of claim 4, in which the processing apparatus is configured to: determine a sequence of orientation errors based on the sequence of orientation estimates corresponding to the sequence of images and the orientation setpoint; and apply a filter to the sequence of orientation errors to obtain the trajectory. 6. 
The system of claim 1, in which the image is captured with an electronic rolling shutter, the orientation error is a first orientation error associated with a first portion of the image, and the processing apparatus is configured to: determine a second orientation error between the orientation of the image sensor and the orientation setpoint during capture of a second portion of the image; and based on the second orientation error, invoke the electronic image stabilization module to correct the second portion of the image for a rotation corresponding to the second orientation error to obtain the stabilized image. 7. The system of claim 1, in which the one or more motion sensors include encoders configured to detect a position and an orientation of the image sensor relative to a movable platform. 8. The system of claim 1, in which the orientation setpoint includes a quaternion. 9. The system of claim 1, in which the electronic image stabilization module is implemented by software executed by the processing apparatus. 10. A method comprising: determining an orientation setpoint for an image sensor; based on a sequence of orientation estimates for the image sensor and the orientation setpoint, invoking a mechanical stabilization system to adjust an orientation of the image sensor toward the orientation setpoint; receiving an image from the image sensor; determining an orientation error between the orientation of the image sensor and the orientation setpoint during capture of the image; based on the orientation error, invoking an electronic image stabilization module to correct the image for a rotation corresponding to the orientation error to obtain a stabilized image; and storing, displaying, or transmitting an output image based on the stabilized image. 11. The method of claim 10, comprising: storing a sequence of images captured after the image in a buffer; and determining the rotation corresponding to the orientation error based on orientation estimates from the sequence of orientation estimates corresponding to the sequence of images. 12. The method of claim 11, comprising: determining a trajectory based on the sequence of orientation estimates corresponding to the sequence of images; and determining the rotation corresponding to the orientation error based on the trajectory. 13. The method of claim 12, comprising: determining a sequence of orientation errors based on the sequence of orientation estimates corresponding to the sequence of images and the orientation setpoint; and applying a filter to the sequence of orientation errors to obtain the trajectory. 14. The method of claim 10, in which the image is captured with an electronic rolling shutter, the orientation error is a first orientation error associated with a first portion of the image, and further comprising: determining a second orientation error between the orientation of the image sensor and the orientation setpoint during capture of a second portion of the image; and based on the second orientation error, invoking the electronic image stabilization module to correct the second portion of the image for a rotation corresponding to the second orientation error to obtain the stabilized image. 15. The method of claim 10, in which an orientation in the sequence of orientation estimates includes an estimate of orientation of the image sensor with respect to a movable platform and an estimate of orientation of the image sensor with respect to gravity. 16. 
The method of claim 10, in which the mechanical stabilization system includes gimbals and motors controlled by proportional integral derivative controllers. 17. A system comprising: an image sensor configured to capture an image; a mechanical stabilization system, including motors, configured to control an orientation of the image sensor to match an orientation setpoint; and an electronic image stabilization module configured to correct the image for a rotation of the image sensor corresponding to orientation errors between the orientation of the image sensor and the orientation setpoint during capture of the image. 18. The system of claim 17, comprising a drone that is coupled to a housing of the image sensor by the mechanical stabilization system. 19. The system of claim 17, comprising: a buffer configured to store a sequence of images captured by the image sensor; and a fast trajectory generator configured to determine a sequence of rotations based on orientation errors corresponding to images in the buffer, wherein the electronic image stabilization module is configured to use a rotation from the sequence of rotations corresponding to an oldest image stored in the buffer. 20. The system of claim 17, comprising: a motion tracking module including one or more motion sensors configured to detect motion of the image sensor and determine a sequence of orientation estimates for the image sensor, wherein the mechanical stabilization system is configured to take the sequence of orientation estimates as input and use the sequence of orientation estimates as feedback for controlling the orientation of the image sensor.
2,600
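A minimal Python sketch of the two-stage mechanical-plus-electronic stabilization loop described in the record above; the quaternion helpers and the gimbal_command / eis_rotate hooks are illustrative placeholders, not the filing's implementation:

    import math

    def q_mul(a, b):
        # Hamilton product of two (w, x, y, z) quaternions.
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    def q_conj(q):
        w, x, y, z = q
        return (w, -x, -y, -z)

    def stabilize(setpoint_q, estimate_q, frame):
        # Orientation error between the sensor's orientation estimate
        # and the orientation setpoint, as a quaternion.
        error_q = q_mul(setpoint_q, q_conj(estimate_q))
        # Stage 1: drive the gimbal motors toward the setpoint (coarse).
        angle = 2.0 * math.acos(max(-1.0, min(1.0, error_q[0])))
        gimbal_command(angle)
        # Stage 2: EIS removes the residual rotation from the captured frame.
        return eis_rotate(frame, q_conj(error_q))

    def gimbal_command(angle):
        pass  # placeholder for the PID-controlled motors

    def eis_rotate(frame, q):
        return frame  # placeholder for the electronic image-warp correction

The division of labor follows the claims: the mechanical system tracks the setpoint as closely as its motors allow, and whatever orientation error remains at capture time is handed to the electronic stage as a rotation to undo in the image.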
10,873
10,873
16,589,009
2,653
A computer-implemented method and system for improving caller verification are provided. The method comprises registering an intended communication session by generating a key using, at least, a first call time window identifier, and storing the key in a database; in response to initiation of the intended communication session, receiving a request for caller verification, wherein the request comprises data representing a second call time window identifier; in response to receiving the request for caller verification, generating a comparison key based on the request; comparing the comparison key with the key stored in the database; and verifying the intended communication session in response to comparing the comparison key with the key.
1. A computer-implemented method for improving caller verification, the method comprising: prior to initiating an intended communication session, registering the intended communication session by generating a key using, at least, a first call time window identifier, and storing the key in a database; in response to initiation of the intended communication session, receiving a request for caller verification, wherein the request comprises data representing a second call time window identifier; in response to receiving the request for caller verification, generating a comparison key based on the request; comparing the comparison key with the key stored in the database; and verifying the intended communication session in response to comparing the comparison key with the key. 2. The method of claim 1, wherein the key comprises a first hash of the first call time window identifier, a first caller data, and a first callee data, and wherein the comparison key comprises a second hash of the second call time window identifier, a second caller data, and a second callee data. 3. The method of claim 1, wherein generating the key further comprises using a first cryptographic salt or a first shared communications token, and wherein generating the comparison key further comprises using a second cryptographic salt or a second shared communications token. 4. The method of claim 1, wherein comparing the comparison key with the key comprises determining that the comparison key matches the key, and wherein verifying the intended communication session comprises causing the sending of a notification of a verified call. 5. The method of claim 1, wherein comparing the comparison key with the key comprises determining that the comparison key does not match the key, and wherein verifying the intended communication session comprises causing the sending of a notification of an unverified call. 6. The method of claim 1, further comprising: pre-registering a set of caller information, wherein the set of caller information comprises at least a caller phone number; pre-registering a set of callee information, wherein the set of callee information comprises at least a callee phone number; and wherein generating the key comprises using the caller phone number and the callee phone number. 7. The method of claim 6, wherein generating the comparison key comprises using the caller phone number and the callee phone number. 8. The method of claim 1, wherein the intended communication session is a telephone call or a video call. 9. A non-transitory, computer-readable medium storing a set of instructions that, when executed by a processor, cause: registering an intended communication session by generating a key using, at least, a first call time window identifier, and storing the key in a database; in response to initiation of the intended communication session, receiving a request from a caller's device for caller verification, wherein the request comprises data representing a second call time window identifier; in response to receiving the request for caller verification, generating a comparison key based on the request; comparing the comparison key with the key stored in the database; and verifying the intended communication session in response to comparing the comparison key with the key. 10. 
The non-transitory, computer-readable medium of claim 9, wherein generating the key comprises hashing the first call time window identifier, a first caller data, and a first callee data using a hash algorithm, and wherein generating the comparison key comprises hashing the second call time window identifier, a second caller data, and a second callee data using the hash algorithm. 11. The non-transitory, computer-readable medium of claim 9, wherein generating the key further comprises using a first cryptographic salt or a first shared communications token, and wherein generating the comparison key further comprises using a second cryptographic salt or a second shared communications token. 12. The non-transitory, computer-readable medium of claim 9, wherein comparing the comparison key with the key comprises determining that the comparison key matches the key, and wherein verifying the intended communication session comprises causing the sending of a notification of a verified call. 13. The non-transitory, computer-readable medium of claim 9, wherein comparing the comparison key with the key comprises determining that the comparison key does not match the key, and wherein verifying the intended communication session comprises causing the sending of a notification of an unverified call. 14. The non-transitory, computer-readable medium of claim 9, wherein the first call time window identifier is generated using an initiation time of the intended communication session. 15. The non-transitory, computer-readable medium of claim 9, storing a set of further instructions that, when executed by the processor, cause: registering a termination event; and in response to registering the termination event, unregistering the intended communication session. 16. The non-transitory, computer-readable medium of claim 9, wherein the intended communication session is a telephone call or a video call. 17. A system for improving caller verification, the system comprising: a processor; a memory operatively connected to the processor and storing instructions that, when executed by the processor, cause: prior to initiation of an intended communication session, registering the intended communication session by generating a key using, at least, a first call time window identifier, and storing the key in a database; in response to initiation of the intended communication session, receiving a request for caller verification, wherein the request comprises data representing a second call time window identifier; in response to receiving the request for caller verification, generating a comparison key based on the request; comparing the comparison key with the key stored in the database; and verifying the intended communication session in response to comparing the comparison key with the key. 18. The system of claim 17, wherein the memory stores further instructions that, when executed by the processor, cause: pre-registering a set of caller information, wherein the set of caller information comprises at least a caller phone number; and wherein generating the key comprises using the caller phone number. 19. The system of claim 18, wherein the first call time window identifier is generated using an initiation time of the intended communication session. 20. The system of claim 18, wherein the memory stores further instructions that, when executed by the processor, cause: registering a termination event; and in response to registering the termination event, unregistering the intended communication session. 21. 
The system of claim 18, wherein generating the key comprises hashing the first call time window identifier, a first caller data, and a first callee data using a hash algorithm. 22. The system of claim 18, wherein generating the comparison key comprises hashing the second call time window identifier, a second caller data, and a second callee data using the hash algorithm. 23. The system of claim 18, wherein the intended communication session is a telephone call or a video call.
A computer-implemented method and system for improving caller verification are provided. The method comprises registering an intended communication session by generating a key using, at least, a first call time window identifier, and storing the key in a database; in response to initiation of the intended communication session, receiving a request for caller verification, wherein the request comprises data representing a second call time window identifier; in response to receiving the request for caller verification, generating a comparison key based on the request; comparing the comparison key with the key stored in the database; and verifying the intended communication session in response to comparing the comparison key with the key.1. A computer-implemented method for improving caller verification, the method comprising: prior to initiating an intended communication session, registering the intended communication session by generating a key using, at least, a first call time window identifier, and storing the key in a database; in response to initiation of the intended communication session, receiving a request for caller verification, wherein the request comprises data representing a second call time window identifier; in response to receiving the request for caller verification, generating a comparison key based on the request; comparing the comparison key with the key stored in the database; and verifying the intended communication session in response to comparing the comparison key with the key. 2. The method of claim 1, wherein the key comprises a first hash of the first call time window identifier, a first caller data, and a first callee data, and wherein the comparison key comprises a second hash of the second call time window identifier, a second caller data, and a second callee data. 3. The method of claim 1, wherein generating the key further comprises using a first cryptographic salt or a first shared communications token, and wherein generating the comparison key further comprises using a second cryptographic salt or a second shared communications token. 4. The method of claim 1, wherein comparing the comparison key with the key comprises determining that the comparison key matches the key, and wherein verifying the intended communication session comprises causing the sending of a notification of a verified call. 5. The method of claim 1, wherein comparing the comparison key with the key comprises determining that the comparison key does not match the key, and wherein verifying the intended communication session comprises causing the sending of a notification of an unverified call. 6. The method of claim 1, further comprising: pre-registering a set of caller information, wherein the set of caller information comprises at least a caller phone number; pre-registering a set of callee information, wherein the set of callee information comprises at least a callee phone number; and wherein generating the key comprises using the caller phone number and the callee phone number. 7. The method of claim 6, wherein generating the comparison key comprises using the caller phone number and the callee phone number. 8. The method of claim 1, wherein the intended communication session is a telephone call or a video call. 9. 
A non-transitory, computer-readable medium storing a set of instructions that, when executed by a processor, cause: registering an intended communication session by generating a key using, at least, a first call time window identifier, and storing the key in a database; in response to initiation of the intended communication session, receiving a request from a caller's device for caller verification, wherein the request comprises data representing a second call time window identifier; in response to receiving the request for caller verification, generating a comparison key based on the request; comparing the comparison key with the key stored in the database; and verifying the intended communication session in response to comparing the comparison key with the key. 10. The non-transitory, computer-readable medium of claim 9, wherein generating the key comprises hashing the first call time window identifier, a first caller data, and a first callee data using a hash algorithm, and wherein generating the comparison key comprises hashing the second call time window identifier, a second caller data, and a second callee data using the hash algorithm. 11. The non-transitory, computer-readable medium of claim 9, wherein generating the key further comprises using a first cryptographic salt or a first shared communications token, and wherein generating the comparison key further comprises using a second cryptographic salt or a second shared communications token. 12. The non-transitory, computer-readable medium of claim 9, wherein comparing the comparison key with the key comprises determining that the comparison key matches the key, and wherein verifying the intended communication session comprises causing the sending of a notification of a verified call. 13. The non-transitory, computer-readable medium of claim 9, wherein comparing the comparison key with the key comprises determining that the comparison key does not match the key, and wherein verifying the intended communication session comprises causing the sending of a notification of an unverified call. 14. The non-transitory, computer-readable medium of claim 9, wherein the first call time window identifier is generated using an initiation time of the intended communication session. 15. The non-transitory, computer-readable medium of claim 9, storing a set of further instructions that, when executed by the processor, cause: registering a termination event; and in response to registering the termination event, unregistering the intended communication session. 16. The non-transitory, computer-readable medium of claim 9, wherein the intended communication session is a telephone call or a video call. 17. 
A system for improving caller verification, the system comprising: a processor; a memory operatively connected to the processor and storing instructions that, when executed by the processor, cause: prior to initiation of an intended communication session, registering the intended communication session by generating a key using, at least, a first call time window identifier, and storing the key in a database; in response to initiation of the intended communication session, receiving a request for caller verification, wherein the request comprises data representing a second call time window identifier; in response to receiving the request for caller verification, generating a comparison key based on the request; comparing the comparison key with the key stored in the database; and verifying the intended communication session in response to comparing the comparison key with the key. 18. The system of claim 17, wherein the memory stores further instructions that, when executed by the processor, cause: pre-registering a set of caller information, wherein the set of caller information comprises at least a caller phone number; and wherein generating the key comprises using the caller phone number. 19. The system of claim 18, wherein the first call time window identifier is generated using an initiation time of the intended communication session. 20. The system of claim 18, wherein the memory stores further instructions that, when executed by the processor, cause: registering a termination event; and in response to registering the termination event, unregistering the intended communication session. 21. The system of claim 18, wherein generating the key comprises hashing the first call time window identifier, a first caller data, and a first callee data using a hash algorithm. 22. The system of claim 18, wherein generating the comparison key comprises hashing the second call time window identifier, a second caller data, and a second callee data using the hash algorithm. 23. The system of claim 18, wherein the intended communication session is a telephone call or a video call.
2,600
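The key scheme in the record above reduces to hashing a call time window identifier together with caller and callee data, optionally with a salt. A minimal Python sketch, where the 30-second window length, the field layout, and the SHA-256 choice are our assumptions, not the filing's specification:

    import hashlib
    import time

    WINDOW_SECONDS = 30  # assumed call time window granularity

    def time_window_id(t=None):
        # Call time window identifier derived from the (initiation) time.
        t = time.time() if t is None else t
        return int(t // WINDOW_SECONDS)

    def make_key(window_id, caller, callee, salt):
        # Hash of the time window identifier, caller data, callee data, and salt.
        msg = f"{window_id}|{caller}|{callee}|{salt}".encode()
        return hashlib.sha256(msg).hexdigest()

    # Registration side: store the key before the call is placed.
    db = {}
    salt = "shared-or-per-call-salt"  # assumed to be agreed out of band
    key = make_key(time_window_id(), "+15551230001", "+15551230002", salt)
    db[key] = True

    # Verification side: rebuild the comparison key from the incoming request
    # and check it against the database; a match verifies the session.
    incoming = make_key(time_window_id(), "+15551230001", "+15551230002", salt)
    print("verified call" if incoming in db else "unverified call")

Because the time window identifier is folded into the hash, a key registered for one window cannot be replayed in a later one, which is what ties verification to the intended session rather than just to the phone numbers.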
10,874
10,874
16,240,495
2,685
In various embodiments, a moodroof media application provides media content within a vehicle cabin. In operation, the moodroof media application determines at least one state associated with the vehicle cabin based on sensor data from at least one sensor. The moodroof media application then receives media content associated with the at least one state. The moodroof media application further causes visual content associated with the media content to be displayed on at least one display surface positioned on a ceiling of the vehicle cabin.
1. A computer-implemented method for presenting media content within a vehicle cabin, comprising: determining at least one state associated with the vehicle cabin based on sensor data from at least one sensor; receiving media content associated with the at least one state; and causing visual content associated with the media content to be displayed on at least one display surface positioned on a ceiling of the vehicle cabin. 2. The method of claim 1, wherein the at least one display surface is disposed vertically below a sunroof of the vehicle cabin. 3. The method of claim 1, wherein the at least one sensor comprises at least one of a vehicle sensor, an environmental sensor, and a biometric sensor. 4. The method of claim 1, wherein the at least one state comprises at least one of a vehicle state, an environmental state, and an emotional state of an occupant of the vehicle cabin. 5. The method of claim 1, wherein the at least one display surface comprises a first display device and a second display device, wherein the first display device is adjacent the second display device, and both the first display device and the second display device display the visual content towards an interior of the vehicle cabin. 6. The method of claim 1, wherein the visual content comprises visual content received from a device associated with an occupant of the vehicle cabin, and the device is communicatively coupled to an infotainment system of the vehicle cabin. 7. The method of claim 1, wherein the at least one display surface runs parallel to the ceiling of the vehicle cabin. 8. A system for displaying media content in a vehicle cabin, comprising: a first display surface positioned on a ceiling of the vehicle cabin; a second display surface positioned on the ceiling of the vehicle cabin, wherein the first display surface is movable relative to the second display surface; a memory storing a media application; and a processor coupled to the memory, wherein the processor, when executing the media application, causes the first display surface and the second display surface to display media content in the vehicle cabin. 9. The system of claim 8, wherein causing the first display surface and the second display surface to display media content comprises transmitting the media content to one or more projectors, wherein the one or more projectors project the media content onto the first display surface and the second display surface. 10. The system of claim 8, wherein the first display surface is at least partially transparent. 11. The system of claim 8, wherein, when the first display surface is in a closed position, the first display surface is horizontally adjacent to the second display surface. 12. The system of claim 11, wherein, when the first display surface is in an open position, the first display surface is positioned vertically above the second display surface. 13. The system of claim 8, wherein, when executed by the processor, the media application further causes the processor to cause the first display surface to cease displaying media content when the first display surface is in an open position. 14. The system of claim 8, wherein the first display surface is disposed vertically below a sunroof panel, wherein, when both the first display surface and the sunroof panel are in open positions, a sunroof of the vehicle cabin is open to an external environment. 15. 
One or more non-transitory computer-readable storage media including instructions that, when executed by a processor, cause the processor to present media content by performing the steps of: determining a first emotional state associated with at least one occupant within a vehicle cabin; receiving media content associated with the first emotional state; and causing visual content associated with the media content to be displayed on at least one display surface positioned on a ceiling of the vehicle cabin. 16. The one or more computer-readable storage media of claim 15, wherein the visual content comprises at least one of still images and video associated with the first emotional state. 17. The one or more computer-readable storage media of claim 15, wherein receiving the media content associated with the first emotional state comprises: determining a goal emotional state based on the first emotional state; and acquiring the media content based on the goal emotional state. 18. The one or more computer-readable storage media of claim 15, wherein receiving the media content associated with the first emotional state comprises: receiving a user input associated with a goal emotional state; and selecting the media content based on the goal emotional state. 19. The one or more computer-readable storage media of claim 15, wherein the at least one display surface comprises a first display surface and a second display surface, wherein the first display surface is moveable relative to a second display surface positioned on the ceiling of the vehicle cabin. 20. The one or more computer-readable storage media of claim 15, further comprising determining at least one of a vehicle state and an environmental state, and wherein receiving the media content associated with the first emotional state comprises acquiring the media content that is further based on the at least one of the vehicle state and the environmental state.
In various embodiments, a moodroof media application provides media content within a vehicle cabin. In operation, the moodroof media application determines at least one state associated with the vehicle cabin based on sensor data from at least one sensor. The moodroof media application then receives media content associated with the at least one state. The moodroof media application further causes visual content associated with the media content to be displayed on at least one display surface positioned on a ceiling of the vehicle cabin.1. A computer-implemented method for presenting media content within a vehicle cabin, comprising: determining at least one state associated with the vehicle cabin based on sensor data from at least one sensor; receiving media content associated with the at least one state; and causing visual content associated with the media content to be displayed on at least one display surface positioned on a ceiling of the vehicle cabin. 2. The method of claim 1, wherein the at least one display surface is disposed vertically below a sunroof of the vehicle cabin. 3. The method of claim 1, wherein the at least one sensor comprises at least one of a vehicle sensor, an environmental sensor, and a biometric sensor. 4. The method of claim 1, wherein the at least one state comprises at least one of a vehicle state, an environmental state, and an emotional state of an occupant of the vehicle cabin. 5. The method of claim 1, wherein the at least one display surface comprises a first display device and a second display device, wherein the first display device is adjacent the second display device, and both the first display device and the second display device display the visual content towards an interior of the vehicle cabin. 6. The method of claim 1, wherein the visual content comprises visual content received from a device associated with an occupant of the vehicle cabin, and the device is communicatively coupled to an infotainment system of the vehicle cabin. 7. The method of claim 1, wherein the at least one display surface runs parallel to the ceiling of the vehicle cabin. 8. A system for displaying media content in a vehicle cabin, comprising: a first display surface positioned on a ceiling of the vehicle cabin; a second display surface positioned on the ceiling of the vehicle cabin, wherein the first display surface is movable relative to the second display surface; a memory storing a media application; and a processor coupled to the memory, wherein the processor, when executing the media application, causes the first display surface and the second display surface to display media content in the vehicle cabin. 9. The system of claim 8, wherein causing the first display surface and the second display surface to display media content comprises transmitting the media content to one or more projectors, wherein the one or more projectors project the media content onto the first display surface and the second display surface. 10. The system of claim 8, wherein the first display surface is at least partially transparent. 11. The system of claim 8, wherein, when the first display surface is in a closed position, the first display surface is horizontally adjacent to the second display surface. 12. The system of claim 11, wherein, when the first display surface is in an open position, the first display surface is positioned vertically above the second display surface. 13. 
The system of claim 8, wherein, when executed by the processor, the media application further causes the processor to cause the first display surface to cease displaying media content when the first display surface is in an open position. 14. The system of claim 8, wherein the first display surface is disposed vertically below a sunroof panel, wherein, when both the first display surface and the sunroof panel are in open positions, a sunroof of the vehicle cabin is open to an external environment. 15. One or more non-transitory computer-readable storage media including instructions that, when executed by a processor, cause the processor to present media content by performing the steps of: determining a first emotional state associated with at least one occupant within a vehicle cabin; receiving media content associated with the first emotional state; and causing visual content associated with the media content to be displayed on at least one display surface positioned on a ceiling of the vehicle cabin. 16. The one or more computer-readable storage media of claim 15, wherein the visual content comprises at least one of still images and video associated with the first emotional state. 17. The one or more computer-readable storage media of claim 15, wherein receiving the media content associated with the first emotional state comprises: determining a goal emotional state based on the first emotional state; and acquiring the media content based on the goal emotional state. 18. The one or more computer-readable storage media of claim 15, wherein receiving the media content associated with the first emotional state comprises: receiving a user input associated with a goal emotional state; and selecting the media content based on the goal emotional state. 19. The one or more computer-readable storage media of claim 15, wherein the at least one display surface comprises a first display surface and a second display surface, wherein the first display surface is moveable relative to a second display surface positioned on the ceiling of the vehicle cabin. 20. The one or more computer-readable storage media of claim 15, further comprising determining at least one of a vehicle state and an environmental state, and wherein receiving the media content associated with the first emotional state comprises acquiring the media content that is further based on the at least one of the vehicle state and the environmental state.
2,600
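A minimal Python sketch of the state-to-content selection described in the record above; the state labels, the goal mapping, and the content catalog are purely illustrative, not from the filing:

    # Map a detected emotional state to a goal emotional state.
    GOAL_FOR_STATE = {
        "stressed": "calm",
        "drowsy": "alert",
        "neutral": "neutral",
    }

    # Media content associated with each goal state.
    CONTENT_FOR_GOAL = {
        "calm": "slow_clouds_loop.mp4",
        "alert": "bright_sky_timelapse.mp4",
        "neutral": "ambient_stars.mp4",
    }

    def select_ceiling_content(sensor_state):
        # Choose content for the ceiling display surfaces based on the
        # occupant state inferred from the sensors.
        goal = GOAL_FOR_STATE.get(sensor_state, "neutral")
        return CONTENT_FOR_GOAL[goal]

    # e.g. biometric sensors classify the occupant as stressed:
    print(select_ceiling_content("stressed"))  # -> slow_clouds_loop.mp4

The same lookup could be conditioned on vehicle or environmental state as well, matching the claim that content acquisition may be further based on those states.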
10,875
10,875
15,790,961
2,636
An apparatus is provided that includes a modulator and an optical transmitter coupled to the modulator and configured to emit an optical beam that the modulator is configured to modulate with data. The optical transmitter may thereby be configured to emit the optical beam carrying the data and without artificial confinement for receipt by an optical receiver configured to detect and recover the data from the optical beam. The optical transmitter may be configured to emit the optical beam with a divergence angle greater than 0.1 degrees, and with a photonic efficiency of less than 0.05%. The photonic efficiency may relate a number of photons of the optical beam detectable by the optical receiver, to a number of photons of the optical beam emitted by the optical transmitter.
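The photonic efficiency quoted above is, geometrically, the fraction of emitted photons that land on the receiver aperture. A rough Python sketch under a uniform-spot assumption, with illustrative distance, divergence, and aperture values that are not from the filing:

    import math

    def photonic_efficiency(divergence_deg, distance_m, aperture_m):
        # Beam radius after propagating distance_m with the given full
        # divergence angle (small-angle geometry, uniform spot).
        beam_radius = distance_m * math.tan(math.radians(divergence_deg) / 2)
        beam_area = math.pi * beam_radius ** 2
        rx_area = math.pi * (aperture_m / 2) ** 2
        # Fraction of emitted photons falling on the receiver aperture,
        # assuming the spot is larger than the aperture.
        return min(1.0, rx_area / beam_area)

    # A 1-degree beam over 100 m into a 25 mm aperture:
    eff = photonic_efficiency(1.0, 100.0, 0.025)
    print(f"{eff:.6%}")  # about 0.02%, i.e. below the 0.05% bound in the claims

This is why the claims pair a divergence angle greater than 0.1 degrees with an efficiency below 0.05%: an unconfined, diverging beam necessarily delivers only a small fraction of its photons to any one receiver.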
1. An apparatus comprising: a demodulator; and an optical receiver coupled to the demodulator and configured to detect an optical beam that carries data the demodulator is configured to recover, the optical receiver being configured to detect the optical beam emitted without artificial confinement from an optical transmitter configured to emit the optical beam modulated with the data, wherein the optical receiver is configured to detect the optical beam emitted with a divergence angle greater than 0.1 degrees, and with a photonic efficiency of less than 0.05%, the photonic efficiency relating a number of photons of the optical beam detectable by the optical receiver, to a number of photons of the optical beam emitted by the optical transmitter. 2. The apparatus of claim 1, wherein the optical receiver is configured to detect the optical beam at least in some instances in which the optical receiver does not have a line-of-sight to the optical transmitter. 3. The apparatus of claim 1 comprising an array of optical receivers including the optical receiver, or the optical receiver includes an array of detectors configured to detect the optical beam, and wherein optical receivers of the array of optical receivers or detectors of the array of detectors are configured to selectively activate and deactivate based on their orientation with respect to the optical transmitter. 4. The apparatus of claim 1, wherein the optical beam includes an incident beam, and a reflected beam produced by reflection of the incident beam, the optical receiver in at least one instance being configured to preferentially detect the reflected beam, and avoid direct detection of the incident beam. 5. The apparatus of claim 1, wherein the optical receiver includes an array of detectors configured to detect the optical beam, the array having a size larger than a spot size of the optical beam at the optical transmitter. 6. The apparatus of claim 5, wherein the optical beam is spatially multiplexed to serve multiple optical receivers, the array of detectors being arranged in a pattern of detectors that corresponds to a pattern of emitters of the optical transmitter that are independently modulated. 7. The apparatus of claim 1 further comprising a wavelength-specific or wavelength-tunable filter to enable the optical receiver to detect the optical beam that is spectral multiplexed to serve multiple optical receivers. 8. The apparatus of claim 1, wherein the optical receiver is configured to detect the optical beam with an adjustable focus to facilitate a match of the optical receiver to characteristics of the optical beam, the adjustable focus in at least one instance including focus of the optical receiver at some intermediate point between the optical transmitter and optical receiver. 9. The apparatus of claim 1 further configured to receive a heartbeat signal for orientation of the optical receiver and optical transmitter, the heartbeat signal being modulated to carry or indicate a location of the optical transmitter, or a signal to cause the optical receiver to return its location or an indication of its location to the optical transmitter. 10. The apparatus of claim 1, wherein the optical receiver includes a camera configured to capture a portion of the optical beam, and electronics with which the camera is configured to communicate to drive coarse or fine steering based on the captured portion of the optical beam to at least partially orient the optical receiver and optical transmitter. 11. 
The apparatus of claim 1, wherein the optical receiver includes a plurality of photodiodes positioned around a periphery of, and that are shadowed by, a limiting aperture of the optical receiver, the photodiodes being configured to detect relative powers of the optical beam, the optical receiver further including electronics with which the photodiodes are configured to communicate to drive coarse or fine steering based on the relative powers of the optical beam to at least partially orient the optical receiver and optical transmitter. 12. The apparatus of claim 1 further comprising a pointing system operable based on guidance provided by the optical receiver based on reception of a broad optical beam by a separate optical receiver or detector. 13. The apparatus of claim 1 further comprising one or more lenses of high-index material to increase an optical gain of the optical receiver. 14. The apparatus of claim 1, wherein the optical receiver being configured to detect the optical beam includes being configured to detect the optical beam that is time division multiplexed. 15. The apparatus of claim 14, wherein the optical receiver being configured to detect the optical beam that is time division multiplexed includes being configured to detect optical beams that are time division multiplexed and emitted from a plurality of optical transmitters. 16. The apparatus of claim 1 further configured to change a data rate or modulation of the optical beam based on a signal to noise ratio (SNR) or data integrity of the optical beam. 17. The apparatus of claim 1, wherein the apparatus is embodied as a mobile device equipped with the demodulator and optical receiver. 18. The apparatus of claim 1 further configured to use forward error correction to control errors in the data recovered by the demodulator. 19. The apparatus of claim 1 further configured to select the optical transmitter based on communication with one or more optical transmitters, and wherein the optical receiver includes electronics to drive coarse or fine steering to at least partially orient the optical receiver and the optical transmitter so selected. 20. The apparatus of claim 1, wherein the optical receiver includes an avalanche photodiode (APD) configured to detect the optical beam. 21. The apparatus of claim 1 further comprising pointing and tracking. 22. The apparatus of claim 21, wherein the pointing and tracking includes a pan-and-tilt control. 23. The apparatus of claim 1, wherein the optical receiver is configured to select the optical transmitter from a plurality of optical transmitters based on a characteristic of optical beams from the plurality of optical transmitters. 24. The apparatus of claim 1 further configured to decrypt the data recovered from the optical beam. 25. A method comprising: detecting, by an optical receiver, an optical beam emitted without artificial confinement from an optical transmitter configured to emit the optical beam modulated with data; and recovering the data from the optical beam so detected; wherein the optical beam detected by the optical receiver has been emitted from the optical transmitter with a divergence angle greater than 0.1 degrees, and with a photonic efficiency of less than 0.05%, the photonic efficiency relating a number of photons of the optical beam detectable by the optical receiver, to a number of photons of the optical beam emitted by the optical transmitter. 26. 
The method of claim 25, wherein detecting the optical beam includes detecting the optical beam by the optical receiver that does not have a line-of-sight to the optical transmitter. 27. The method of claim 25, wherein detecting the optical beam includes detecting the optical beam by the optical receiver of an array of optical receivers including the optical receiver, or by the optical receiver including an array of detectors, and wherein the method further comprises selectively activating or deactivating the optical receivers of the array of optical receivers or detectors of the array of detectors based on their orientation with respect to the optical transmitter. 28. The method of claim 25, wherein the optical beam includes an incident beam, and a reflected beam produced by reflection of the incident beam, and detecting the optical beam includes preferentially detecting the reflected beam, and avoiding direct detection of the incident beam. 29. The method of claim 25, wherein detecting the optical beam includes detecting the optical beam by the optical receiver including an array of detectors having a size larger than a spot size of the optical beam at the optical transmitter. 30. The method of claim 29, wherein the optical beam is spatially multiplexed to serve multiple optical receivers, the array of detectors being arranged in a pattern of detectors that corresponds to a pattern of emitters of the optical transmitter that are independently modulated. 31. The method of claim 25, wherein detecting the optical beam includes detecting the optical beam that is spectrally multiplexed to serve multiple optical receivers, and the method further comprises wavelength-specific or wavelength-tunable filtering the optical beam to enable the optical receiver to detect the optical beam. 32. The method of claim 25, wherein detecting the optical beam includes detecting the optical beam with an adjustable focus to facilitate a match of the optical receiver to characteristics of the optical beam, the adjustable focus including focus of the optical receiver at some intermediate point between the optical transmitter and optical receiver. 33. The method of claim 25 further comprising receiving a heartbeat signal for orientation of the optical receiver and optical transmitter, the heartbeat signal being modulated to carry or indicate a location of the optical transmitter, or a signal to cause the optical receiver to return its location or an indication of its location to the optical transmitter. 34. The method of claim 25 further comprising capturing a portion of the optical beam by a camera; and driving coarse or fine steering based on the captured portion of the optical beam to at least partially orient the optical receiver and optical transmitter. 35. The method of claim 25, wherein the optical receiver includes a plurality of photodiodes positioned around a periphery of, and that are shadowed by, a limiting aperture of the optical receiver, and wherein the method further comprises: detecting relative powers of the optical beam by the photodiodes; and driving coarse or fine steering based on the relative powers of the optical beam to at least partially orient the optical receiver and optical transmitter. 36. The method of claim 25 further comprising operating a pointing system based on guidance provided by the optical receiver based on reception of a broad optical beam by a separate optical receiver or detector. 37. 
The method of claim 25 further comprising increasing an optical gain of the optical receiver with one or more lenses of high-index material. 38. The method of claim 25, wherein detecting the optical beam includes detecting the optical beam that is time division multiplexed. 39. The method of claim 38, wherein detecting the optical beam that is time division multiplexed includes detecting optical beams that are time division multiplexed and emitted from a plurality of optical transmitters. 40. The method of claim 25 further comprising changing a data rate or modulation of the optical beam based on a signal to noise ratio (SNR) or data integrity of the optical beam. 41. The method of claim 25, wherein detecting the optical beam includes detecting the optical beam at a mobile device equipped with the optical receiver. 42. The method of claim 25 further comprising using forward error correction to control errors in the data recovered. 43. The method of claim 25 further comprising selecting the optical transmitter based on communication with one or more optical transmitters; and driving coarse or fine steering to at least partially orient the optical receiver and the optical transmitter so selected. 44. The method of claim 25, wherein detecting the optical beam includes detecting the optical beam with an avalanche photodiode (APD). 45. The method of claim 25 further comprising pointing and tracking. 46. The method of claim 45, wherein the pointing and tracking includes a pan-and-tilt control. 47. The method of claim 25 further comprising selecting the optical transmitter from a plurality of optical transmitters based on a characteristic of optical beams from the plurality of optical transmitters. 48. The method of claim 25 further comprising decrypting the data recovered from the optical beam.
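Claims 11 and 35 describe steering driven by the relative powers seen by photodiodes around the limiting aperture. A hedged sketch of that idea, assuming a four-diode layout and a simple proportional correction; none of the names or gains below come from the application:

```python
# Hedged sketch of relative-power steering: photodiodes around the limiting
# aperture report the power they see, and the imbalance drives coarse or
# fine pan/tilt corrections. The four-diode layout, gain, and function
# names are illustrative assumptions.

def steering_error(p_top: float, p_bottom: float, p_left: float, p_right: float):
    """Normalized tilt/pan error derived from relative photodiode powers."""
    total = p_top + p_bottom + p_left + p_right
    if total == 0:
        return 0.0, 0.0  # no light detected; hold the current orientation
    tilt = (p_top - p_bottom) / total
    pan = (p_right - p_left) / total
    return tilt, pan

def steering_command(powers, gain: float = 0.5):
    """Proportional pan/tilt correction toward balancing the four powers."""
    tilt_err, pan_err = steering_error(*powers)
    return gain * tilt_err, gain * pan_err

# Beam biased toward the top diode -> a small tilt correction, no pan.
print(steering_command((1.2, 0.8, 1.0, 1.0)))  # ~(0.05, 0.0)
```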
2,600
10,876
10,876
15,621,490
2,661
Methods are provided for automatically performing atmospheric compensation of a multi or hyper spectral image. One method comprises transforming at least two endmembers extracted from an image into at-ground reflectance. The transformation may be approximate and/or only in certain spectral bands in order to reduce processing time. A matching component is then located in a spectral library for each of the at least two extracted endmembers. Gain and offset values are then calculated using the at least two matched extracted endmember and spectral library component pairs. At least part of the image is then compensated using the calculated gain and offset values. Another method uses at least one endmember extracted from the image and a black level. Methods for atmospheric compensation using water vapor content of pixels are also provided. In addition, methods for shadow correction of hyper and multi spectral images are provided.
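The gain-and-offset step amounts to fitting, per spectral band, a line from measured endmember values to library reflectances through the two matched pairs. A minimal NumPy sketch under that reading, with illustrative shapes and values:

```python
import numpy as np

# Hedged sketch of the gain/offset step described above: per spectral band,
# fit reflectance = gain * measured + offset through two matched pairs of
# (extracted endmember, spectral-library component), then compensate the
# image. Array shapes and sample values are illustrative assumptions.

def gain_offset(measured: np.ndarray, library: np.ndarray):
    """measured, library: shape (2, bands) -- two matched pairs per band."""
    gain = (library[1] - library[0]) / (measured[1] - measured[0])
    offset = library[0] - gain * measured[0]
    return gain, offset

def compensate(image: np.ndarray, gain: np.ndarray, offset: np.ndarray):
    """image: (rows, cols, bands) at-sensor values -> at-ground reflectance."""
    return image * gain + offset

measured = np.array([[0.10, 0.20],    # endmember 1 in two bands
                     [0.60, 0.70]])   # endmember 2 in two bands
library = np.array([[0.05, 0.12],     # matching library reflectances
                    [0.45, 0.55]])
g, o = gain_offset(measured, library)
print(compensate(np.full((2, 2, 2), 0.3), g, o))
```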
1-17. (canceled) 18. A method for use in shadow compensation of a multi or hyper spectral image, the method comprising: (a) identifying a brightest pixel in the image in terms of its norm and assuming the pixel is fully lit by both the sun and the sky; (b) determining a direct flux (sun) and a diffuse flux (sky) of the pixel; (c) determining a sunlit only surface signature and a skylit only surface signature of the pixel using the direct and diffuse fluxes; (d) determining an abundance of each of the sunlit surface and skylit surface signatures in pixels of the image, and removing the sunlit surface and skylit surface signatures from pixels of the image; (e) repeating (a) through (d) a predetermined number of times or until the norm of the pixel last identified in (a) is below a predetermined threshold value, where the pixel identified in (a) in the next iteration is the next brightest pixel in the image after the previous iteration sunlit and skylit only surface signatures have been removed; and (f) compensating at least some pixels of the image, where a given pixel is compensated using the direct and diffuse fluxes of the given pixel as well as a sun illumination factor and a sky illumination factor of the given pixel, the sun illumination factor being the sum of the abundances of the sunlit surface signatures and the sky illumination factor being a sum of the abundances of the skylit surface signatures of all pixels selected in the iterative process for the given pixel. 19. The method of claim 18, wherein the compensating of a given pixel in (f) involves multiplying the given pixel by a value proportional to: $\left(\frac{I_{dir} + I_{diff}}{f_{sun} I_{dir} + f_{sky} I_{diff}}\right)$, where $I_{dir}$ is the direct flux, $I_{diff}$ is the diffuse flux, $f_{sun}$ is the solar illumination factor and $f_{sky}$ is the sky illumination factor of the given pixel. 20. The method of claim 18, wherein the determining of the skylit only surface signature involves multiplying a fully lit pixel signature, ρ, by a value proportional to: $\left(\frac{I_{diff}}{I_{dir} + I_{diff}}\right)$, where $I_{dir}$ is the direct flux and $I_{diff}$ is the diffuse flux, and wherein the determining of the sunlit only surface signature involves multiplying the fully lit pixel signature, ρ, by a value proportional to: $\left(1 - \frac{I_{diff}}{I_{dir} + I_{diff}}\right)$. 21. 
A non-transitory computer-readable storage medium comprising instructions for execution on one or more electronic devices, the instructions for a method for use in shadow compensation of a multi or hyper spectral image, the method comprising: (a) identifying a brightest pixel in the image in terms of its norm and assuming the pixel is fully lit by both the sun and the sky; (b) determining a direct flux (sun) and a diffuse flux (sky) of the pixel; (c) determining a sunlit only surface signature and a skylit only surface signature of the pixel using the direct and diffuse fluxes; (d) determining an abundance of each of the sunlit surface and skylit surface signatures in pixels of the image, and removing the sunlit surface and skylit surface signatures from pixels of the image; (e) repeating (a) through (d) a predetermined number of times or until the norm of the pixel last identified in (a) is below a predetermined threshold value, where the pixel identified in (a) in the next iteration is the next brightest pixel in the image after the previous iteration sunlit and skylit only surface signatures have been removed; and (f) compensating at least some pixels of the image, where a given pixel is compensated using the direct and diffuse fluxes of the given pixel as well as a sun illumination factor and a sky illumination factor of the given pixel, the sun illumination factor being the sum of the abundances of the sunlit surface signatures and the sky illumination factor being a sum of the abundances of the skylit surface signatures of all pixels selected in the iterative process for the given pixel. 22. The non-transitory computer readable medium of claim 21, wherein the compensating of a given pixel in (f) involves multiplying the given pixel by a value proportional to: $\left(\frac{I_{dir} + I_{diff}}{f_{sun} I_{dir} + f_{sky} I_{diff}}\right)$, where $I_{dir}$ is the direct flux, $I_{diff}$ is the diffuse flux, $f_{sun}$ is the solar illumination factor and $f_{sky}$ is the sky illumination factor of the given pixel. 23. The non-transitory computer readable medium of claim 21, wherein the determining of the skylit only surface signature involves multiplying a fully lit pixel signature, ρ, by a value proportional to: $\left(\frac{I_{diff}}{I_{dir} + I_{diff}}\right)$, where $I_{dir}$ is the direct flux and $I_{diff}$ is the diffuse flux, and wherein the determining of the sunlit only surface signature involves multiplying the fully lit pixel signature, ρ, by a value proportional to: $\left(1 - \frac{I_{diff}}{I_{dir} + I_{diff}}\right)$.
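The compensation factors in claims 19/20 (and 22/23) translate directly into code. A hedged Python sketch of those formulas, with illustrative flux values:

```python
# Hedged sketch of the per-pixel shadow compensation recited above, using
# the stated definitions: I_dir/I_diff are the direct (sun) and diffuse
# (sky) fluxes, f_sun/f_sky the illumination factors. Sample values are
# illustrative assumptions.

def skylit_signature(rho: float, i_dir: float, i_diff: float) -> float:
    # Claim 20: skylit-only signature ~ rho * I_diff / (I_dir + I_diff)
    return rho * i_diff / (i_dir + i_diff)

def sunlit_signature(rho: float, i_dir: float, i_diff: float) -> float:
    # Claim 20: sunlit-only signature ~ rho * (1 - I_diff / (I_dir + I_diff))
    return rho * (1.0 - i_diff / (i_dir + i_diff))

def compensate_pixel(pixel: float, i_dir: float, i_diff: float,
                     f_sun: float, f_sky: float) -> float:
    # Claim 19: multiply by (I_dir + I_diff) / (f_sun*I_dir + f_sky*I_diff)
    return pixel * (i_dir + i_diff) / (f_sun * i_dir + f_sky * i_diff)

# A half-sunlit pixel (f_sun = 0.5, full sky view) brightened toward the
# value it would have under full illumination.
print(compensate_pixel(pixel=0.2, i_dir=0.8, i_diff=0.2, f_sun=0.5, f_sky=1.0))
```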
2,600
10,877
10,877
15,276,786
2,647
A system and method of supporting access to content over first and second networks that allows a user to access content over different networks, either on the same device or with different devices. The access may be supported in a continuous or seamless manner without substantially interrupting access to the content, such as by instigating the transition during a period of time when little, if any, content, or content of value, is likely to be missed.
1. A method of supporting seamless transmission of a particular piece of content to a mobile device that travels from a first access network to a second access network while the content is being transmitted, wherein the particular piece of content is transmitted over the first access network to a first multicast group according to a first codec and over the second access network to a second multicast group according to a second codec, the method comprising: transmitting instructions to the mobile device that allow the mobile device to seamlessly access the content as the mobile device travels from the first access network to the second access network, wherein the instructions instruct the mobile device to locate the second multicast group assigned to the content and to process the content for output according to requirements of the second codec once the mobile device is no longer accessing the content over the first network. 2. The method of claim 1 further comprising instructing the mobile device to locate and process content based on information collected while tracking content carried over at least three access networks that simultaneously transmit the content, wherein the first and second access networks are two of the at least three networks. 3. The method of claim 2 further comprising collecting information that identifies, if available, multicast groups and codecs used by each access network to support transmission of the content. 4. The method of claim 2 further comprising limiting the information tracking to content transmitted to multicast groups, the multicast groups having some number of members selected from a total number of members that receive other content indiscriminately broadcasted over each access network. 5. The method of claim 1 further comprising determining the mobile device to be traveling from the first to the second access network upon receipt of a message from the mobile device that indicates a termination in transmission over the first network of the piece of content. 6. The method of claim 1 further comprising generating the instructions after receipt of a transition request transmitted from the mobile device, the transition request requesting access to the content on the second access network. 7. The method of claim 6 further comprising denying the transition request if the requested content is being transmitted over the second network and the requesting user lacks access authorization. 8. 
For use in a system having first and second networks that transmit content over broadcast tiers and multicast tiers, the broadcast tiers indiscriminately transmitting the content to a number of users and the multicast tiers discriminately transmitting the content to some number of users less than the number of users to whom content is broadcasted, a method that supports continuous access to multicasted content when a user transitions from the first to the second network comprising: determining a number of multicast groups operating on the multicast tiers of the first and second networks; determining content being simultaneously multicasted on both of the first and second networks; identifying multicasted content being accessed by the user; if the identified content is being simultaneously multicasted over the second network, transmitting instructions that transition user access to the identified content from the first network to the second network, the instructions including information sufficient to locate the multicast group of the second network also identified to be multicasting the identified content. 9. The method of claim 8 further comprising identifying codecs used by each of the number of multicast groups and including codec instructions within the transmitted instructions that allow a device used by the user to process the identified content according to the codec used by the second network. 10. The method of claim 8 further comprising identifying the identified content to correspond with content showing on a channel tuned to by a device that descrambles television signals. 11. The method of claim 8 further comprising identifying the need to transition user access to the identified content in response to receiving an out of range signal from a device accessing the identified content over the first network, the out of range signal indicating the device is approaching a boundary limit for receiving the identified content over the first network. 12. A method of supporting continuous access to content when access to the content is transitioned from a first network to a second network, the method comprising: identifying whether the content being accessed over the first network is also available over the second network; if the content is also available over the second network, identifying whether the content is available over a broadcast or multicast tier of the second network; if the content is available over the broadcast tier, generating transition instructions dependent on a static channel map used by the broadcast tier to instruct access to the content on the second network; and if the content is available over the multicast tier, generating transition instructions dependent on a current multicast group used by the multicast tier to instruct access to the content on the second network. 13. The method of claim 12 further comprising preventing transmission of the transition instructions to a device used to access the content over the second network if the device is not authorized to access the content over the second network. 14. The method of claim 13 further comprising determining the device is not authorized to receive the content if transmission restraints on the content prevent the content from being accessed within a geographical area associated with the second network. 15. The method of claim 12 further comprising, if the static channel map is pre-loaded onto a device used to access the content, instructing the device to generate the transition instructions from the static channel map. 16. 
The method of claim 12 further comprising transmitting the transition instructions from a transition manager that tracks content available over the first and second networks. 17. The method of claim 16 further comprising the transition manager tracking codecs used by the first and second network to support transmission of the content and including codec support instructions in the transmitted transition instructions. 18. The method of claim 12 further comprising transmitting the transition instructions to a device used to access the content over the first network and before the device begins receiving signals over the second network. 19. The method of claim 12 further comprising transmitting the transition instructions to support transitioning access from the first network being a wireline network to the second network being a wireless network. 20. The method of claim 12 further comprising transmitting the transition instructions to support transitioning of a mobile device from the wireline network being a cable television network to the wireless network being a wireless phone network.
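Claim 12's branching, broadcast tier via a static channel map versus multicast tier via the current multicast group, can be summarized in a short sketch. The dictionaries and field names below are illustrative assumptions, not the patent's data model:

```python
# Hedged sketch of the tier-dependent logic in claim 12: content carried on
# the second network's broadcast tier yields instructions derived from the
# static channel map, while multicast-tier content yields instructions
# derived from the current multicast group.

def transition_instructions(content_id: str, second_network: dict):
    if content_id not in second_network["available"]:
        return None  # content is not also available over the second network
    if content_id in second_network["static_channel_map"]:
        # Broadcast tier: instructions depend on the static channel map.
        return {"tier": "broadcast",
                "channel": second_network["static_channel_map"][content_id]}
    # Multicast tier: instructions depend on the current multicast group.
    return {"tier": "multicast",
            "group": second_network["multicast_groups"][content_id]}

net2 = {"available": {"news", "sports"},
        "static_channel_map": {"news": 7},
        "multicast_groups": {"sports": "239.1.2.3:5000"}}
print(transition_instructions("sports", net2))  # multicast-tier instructions
print(transition_instructions("movies", net2))  # None: not on second network
```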
2,600
10,878
10,878
15,112,281
2,647
A method, computer program, network control node, user equipment and base station are disclosed which allow a wireless communication network to support different types of user equipment that have particular signalling requirements. In particular, the network supports low complexity devices that require signals having low transport block sizes, as well as devices that require a coverage enhanced mode in which messages are repeated. Information regarding these particular capabilities is transmitted to and stored in the network control node, which then transmits this information as paging information with any paging request.
1. A method of sending paging instructions to a user equipment from a network control node of a wireless communication network, radio coverage for said wireless communication network being provided in a plurality of cells by a plurality of base stations, the method comprising: receiving at said network control node from at least one of said base stations, user equipment capability information comprising at least one of: low complexity identification information identifying said user equipment as a low complexity user equipment; an indication from said user equipment as to whether said user equipment supports a coverage enhanced mode, said coverage enhanced mode utilising repetition of some messaging to increase coverage; and an indication as to whether said user equipment supporting said coverage enhanced mode is currently in an area requiring said coverage enhanced mode; storing said at least one of said low complexity identification information and said coverage enhanced mode indications as paging property information for said user equipment at said network control node; and transmitting said paging property information to at least one of said base stations when instructing paging of said user equipment that is currently in idle mode. 2. A method according to claim 1, comprising receiving both of said low complexity identification information and said coverage enhanced mode indication and storing said low complexity identification information and said coverage enhanced mode indication as said paging property information. 3. A method according to claim 1, wherein at least some of said paging property information is received at said network control node as part of a radio resource control RRC signal transmitted from said user equipment and sent to said network control node via said base station. 4. A method according to claim 1, wherein said user equipment capability information comprises other user equipment capability information in addition to said paging property information, said paging property information being received in a different container from said other user equipment capability information and said step of storing comprises storing said paging property information separately from said other user equipment capability information. 5. A network control node for a wireless communication network, radio coverage for said wireless communication network being provided in a plurality of cells by a plurality of base stations, said network control node being operable to send paging instructions to a user equipment and comprising: a receiver operable to receive from one of said base stations, user equipment capability information comprising at least one of: low complexity identification information identifying said user equipment as a low complexity user equipment; an indication from said user equipment as to whether said user equipment supports a coverage enhanced mode, said coverage enhanced mode utilising repetition of some messaging to increase coverage; and an indication as to whether said user equipment supporting said coverage enhanced mode is currently in an area requiring said coverage enhanced mode; a data store operable to store said received at least one of said low complexity identification information and said coverage enhanced mode indications as paging property information for said user equipment; and a transmitter operable to transmit said paging property information to at least one of said base stations when instructing paging of said user equipment that is currently in idle mode. 6. 
A method performed by a low complexity user equipment communicating with a wireless network, said user equipment being configured to transmit user equipment capability information to a base station on connection to said network as part of radio resource control signalling, said method comprising transmitting separate capability containers, one of said capability containers comprising at least one of: low complexity identification information identifying said user equipment as a low complexity user equipment; and an indication as to whether said user equipment supports a coverage enhanced mode. 7. A low complexity user equipment for communicating with a wireless network, said low complexity user equipment being configured to transmit to a base station on connection to said network, user equipment capability information as part of radio resource control signalling, said low complexity user equipment being configured to transmit said user equipment capability information in separate containers, one of said containers comprising at least one of: low complexity identification information identifying said user equipment as a low complexity user equipment; and an indication as to whether said user equipment supports a coverage enhanced mode. 8. A method performed by a base station, comprising: receiving user equipment capability information; and transmitting at least a part of said received user equipment capability information to a network control node as paging property information, said paging property information comprising at least one of low complexity identification information identifying a user equipment as a low complexity user equipment and an indication indicating whether said user equipment supports a coverage enhanced mode and mode information indicating if said user equipment supporting said coverage enhanced mode is currently in an area requiring said coverage enhanced mode. 9. A method according to claim 8, wherein said user equipment capability information comprises other user equipment capability information in addition to said paging property information, said method further comprising: transmitting said paging information in a different container from said other user equipment capability information to said network control node. 10. A method according to claim 9, comprising extracting said user equipment paging information from said user equipment capability information prior to transmitting said paging information to said network control node. 11. A method according to claim 8, further comprising receiving paging instructions from a network control node, said paging instructions instructing paging of a user equipment and comprising said paging property information indicating said user equipment to be at least one of a low complexity user equipment and a user equipment that supports a coverage enhanced mode and is currently in an area requiring said coverage enhanced mode; in response to said received paging instructions, transmitting a paging message to said user equipment; and where said paging property information indicates said user equipment to be a low complexity user equipment, transmitting said paging message in transport blocks of less than a predetermined size; and where said paging property information indicates said user equipment to support a coverage enhanced mode and be in an area requiring said coverage enhanced mode, repeatedly transmitting said paging message to said user equipment. 12. 
A base station configured to: receive user equipment capability information from a user equipment; and transmit to a network control node at least a part of said received user equipment capability information as paging property information, said paging property information comprising at least one of low complexity identification information identifying a user equipment as a low complexity user equipment and an indication indicating whether said user equipment supports a coverage enhanced mode and mode information indicating if said user equipment supporting said coverage enhanced mode is currently in an area requiring said coverage enhanced mode. 13. A base station according to claim 12, said base station being configured to receive other user equipment capability information in addition to said paging property information and to transmit said paging information in a different container from said other user equipment capability information to said network control node. 14. A base station according to claim 12, said base station being configured to receive paging instructions from a network control node, said paging instructions instructing paging of a user equipment and comprising said paging property information indicating said user equipment to be at least one of a low complexity user equipment and a user equipment that supports a coverage enhanced mode and is currently in an area requiring said coverage enhanced mode; said base station being configured to respond to said received paging instructions by transmitting a paging message to said user equipment; and where said paging property information indicates said user equipment to be a low complexity user equipment, said paging message being transmitted in transport blocks of less than a predetermined size; and where said paging property information indicates said user equipment to support a coverage enhanced mode and be in an area requiring said coverage enhanced mode, to repeatedly transmit said paging message to said user equipment. 15. A computer program which, when executed by a data processing apparatus, controls said data processing apparatus to perform a method according to claim 1.
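The paging behavior in claims 11 and 14, small transport blocks for a low complexity user equipment and repetition in coverage enhanced mode, is sketched below in Python. The block-size limit, repetition count and field names are illustrative assumptions:

```python
# Hedged sketch of the base-station paging behavior: a low complexity UE
# gets the paging message in transport blocks below a size limit, and a UE
# that supports coverage enhanced mode and is in an area requiring it gets
# the message repeated.

MAX_LC_TRANSPORT_BLOCK = 1000  # bits; assumed limit for illustration

def transmit(block: str) -> None:
    print(f"tx {len(block)} bits")  # stand-in for the radio transmission

def page(paging_properties: dict, message_bits: str) -> None:
    if paging_properties.get("low_complexity"):
        # Transport blocks of less than the predetermined size.
        blocks = [message_bits[i:i + MAX_LC_TRANSPORT_BLOCK]
                  for i in range(0, len(message_bits), MAX_LC_TRANSPORT_BLOCK)]
    else:
        blocks = [message_bits]
    repeats = 4 if (paging_properties.get("supports_ce_mode")
                    and paging_properties.get("in_ce_area")) else 1
    for _ in range(repeats):  # coverage enhanced mode: repeat the message
        for block in blocks:
            transmit(block)

# Paging property information as stored at the network control node.
page({"low_complexity": True, "supports_ce_mode": True, "in_ce_area": True},
     message_bits="0" * 2500)
```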
2,600
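The paging behaviour recited in the record above reduces to a small amount of dispatch logic. The following Python is a minimal sketch, not the applicant's implementation: the base station extracts the paging-relevant subset of the UE capability information for forwarding in its own container, the network control node stores it and attaches it to paging instructions, and the paging base station caps the transport block size for low complexity UEs and repeats the message for UEs in a coverage enhanced area. All identifiers, the block-size threshold, and the repetition count are illustrative assumptions.

from dataclasses import dataclass

# Assumed values: the claims say only "less than a predetermined size"
# and "repeatedly transmitting", without fixing concrete numbers.
MAX_LC_TRANSPORT_BLOCK_BITS = 1000
CE_REPETITIONS = 4

@dataclass
class PagingProperties:
    low_complexity: bool    # UE identified as a low complexity UE
    supports_ce_mode: bool  # UE supports a coverage enhanced mode
    in_ce_area: bool        # UE currently in an area requiring that mode

def extract_paging_properties(capability_info):
    # Base station side: pull the paging-relevant subset out of the full
    # capability information so it can travel in a separate container.
    return PagingProperties(
        low_complexity=capability_info.get("low_complexity", False),
        supports_ce_mode=capability_info.get("ce_mode", False),
        in_ce_area=capability_info.get("in_ce_area", False),
    )

class NetworkControlNode:
    def __init__(self):
        self._store = {}  # paging property information, keyed by UE id

    def register(self, ue_id, props):
        self._store[ue_id] = props

    def page(self, ue_id, base_station):
        # Paging instructions carry the stored paging property information.
        base_station.page_ue(ue_id, self._store[ue_id])

class BaseStation:
    def page_ue(self, ue_id, props):
        # Low complexity UE: keep transport blocks below the threshold.
        tb_limit = MAX_LC_TRANSPORT_BLOCK_BITS if props.low_complexity else None
        # Coverage enhanced mode in a CE area: repeat the paging message.
        repeats = CE_REPETITIONS if props.supports_ce_mode and props.in_ce_area else 1
        for _ in range(repeats):
            print(f"page {ue_id}: tb_limit={tb_limit}")

node = NetworkControlNode()
node.register("ue1", extract_paging_properties({"ce_mode": True, "in_ce_area": True}))
node.page("ue1", BaseStation())  # prints the paging message four times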
10,879
10,879
15,669,430
2,644
Disclosed are a signaling optimization method and device intended to resolve a problem of heavy signaling overheads and long data transmission delay when a user equipment (UE) accesses a network side. The method includes: receiving, by the UE, configuration information sent by a first network side device, where the configuration information includes a list, and the list is a cell list or a base station list; and entering, by the UE, an intermediate state according to the configuration information, where the intermediate state means that: when the UE stores context information of the UE, if the UE moves and a cell movement range falls within a coverage area of a cell or a base station included in the list, the UE performs cell reselection according to the list. Embodiments of the present disclosure are applicable to optimization of signaling transmission between the UE and a network side.
1. A user equipment (UE), comprising: a receiver, configured to receive configuration information sent by a first network side device, wherein the configuration information comprises a list, and the list is a cell list or a base station list; and a processor, configured to cause the UE to enter an intermediate state according to the configuration information, wherein the intermediate state is a state such that when the UE moves and a cell movement range falls within a coverage area of a cell or a base station comprised in the list, the UE performs cell reselection according to the list. 2. The UE according to claim 1, wherein when the UE is in the intermediate state, if the cell movement range of the UE falls beyond the coverage area of the cell or the base station comprised in the list, then the UE sends a notification message to a network side device on which a current serving cell of the UE is located, so that the UE is restored from the intermediate state to a connection state or returns to an idle state, and the network side device on which the current serving cell is located is the first network side device or a second network side device after the UE performs cell reselection. 3. The UE according to claim 1, wherein the configuration information further comprises one or more of: a condition under which the first network side device instructs the UE to enter the intermediate state according to the configuration information, and the condition comprises entering the intermediate state immediately or entering an idle state after a preset time; a time period in which the first network side device instructs the UE to enter the intermediate state according to the configuration information; and an operation instruction performed after the UE enters the intermediate state during the time period according to the configuration information, wherein the operation instruction is used to instruct the UE to enter the idle state and/or instruct the UE to notify the first network side device of current location information of the UE. 4. The UE according to claim 1, wherein the configuration information further comprises a cell reselection parameter for performing cell reselection by the UE. 5. The UE according to claim 1, wherein the configuration information is carried in a radio resource control (RRC) message by the first network side device, and the RRC message comprises an RRC connection establishment message, an RRC reconfiguration message, or an RRC connection release message. 6. The UE according to claim 1, wherein in a process in which the processor is configured to perform cell reselection, when it is determined that the UE needs to send uplink data to a network side, the processor is further configured to restore the UE to the connection state; and wherein the UE further comprises: a transmitter, configured to send the uplink data to a network side device that has been restored to the connection state, wherein the network side device that has been restored to the connection state comprises the first network side device or the second network side device. 7. The UE according to claim 6, wherein the transmitter is further configured to: send a scheduling request or a random access request to the first network side device when the cell of the UE does not change, to restore the UE to the connection state. 8. 
The UE according to claim 6, wherein: the transmitter is further configured to send a radio resource control (RRC) resume request message to the first network side device or the second network side device, wherein the RRC resume request message comprises at least one of cell information of the first network side device, a cell radio network temporary identity (C-RNTI) of the UE, or indication information for requesting to restore the UE to the connection state; the receiver is further configured to receive an RRC resume confirmation message sent by the first network side device or the second network side device, wherein the RRC resume confirmation message is used to indicate that the UE is restored to the connection state; and the transmitter is further configured to send an RRC connection resume completion message to the first network side device or the second network side device. 9. A first network side device, comprising: a transmitter, configured to send configuration information to a user equipment (UE), wherein the configuration information comprises a list, wherein the list is a cell list or a base station list, wherein the configuration information is used to instruct the UE to enter an intermediate state, and wherein the intermediate state is a state that, when the UE stores context information of the UE, if the UE moves and a cell movement range falls within a coverage area of a cell or a base station comprised in the list, then the UE performs cell reselection according to the list, wherein the transmitter is further configured to send the context information of the UE to a network side device in the list. 10. The first network side device according to claim 9, wherein the configuration information is further used to indicate that: after the UE enters the intermediate state, if the UE moves and the cell movement range falls beyond the coverage area of the cell or the base station comprised in the list, the UE sends a notification message to a network side device on which a current serving cell of the UE is located, so that the UE is restored from the intermediate state to a connection state or returns to an idle state, and the network side device on which the current serving cell is located is the first network side device or a second network side device after the UE performs cell reselection. 11. The first network side device according to claim 9, wherein the configuration information further comprises one or more of: a condition under which the first network side device instructs the UE to enter the intermediate state according to the configuration information, and the condition comprises entering the intermediate state immediately or entering the intermediate state after a preset time; a time period in which the first network side device instructs the UE to enter the intermediate state according to the configuration information; and an operation instruction performed after the UE enters the intermediate state during the time period according to the configuration information, and the operation instruction is used to instruct the UE to enter an idle state and/or instruct the UE to notify the first network side device of current location information of the UE. 12. The first network side device according to claim 9, wherein the configuration information further comprises a cell reselection parameter for performing cell reselection by the UE. 13. 
The first network side device according to claim 9, wherein the transmitter is further configured to: send the configuration information to the UE by using a radio resource control (RRC) message, wherein the RRC message comprises an RRC reconfiguration message or an RRC connection release message. 14. The first network side device according to claim 9, wherein the transmitter is further configured to send the context information of the UE to a core network device; and the first network side device further comprises: a processor, configured to release the context information of the UE when the first network side device stores the context information of the UE for a preset time. 15. The first network side device according to claim 9, further comprising: a receiver, configured to receive a radio resource control (RRC) resume request message sent by the UE, wherein the RRC resume request message comprises at least one of cell information of the first network side device, a cell radio network temporary identity (C-RNTI) of the UE, or indication information for requesting to restore the UE to the connection state, wherein: the transmitter is further configured to send an RRC resume confirmation message to the UE, wherein the RRC resume confirmation message comprises a parameter for extending, deleting, or modifying the context information of the UE by the first network side device; and the receiver is further configured to receive an RRC connection resume completion message sent by the UE. 16. The first network side device according to claim 9, wherein the receiver is further configured to receive an uplink scheduling request or a random access request sent by the UE; and the transmitter is further configured to send an uplink resource to the UE, wherein the uplink resource is used by the UE to send uplink data to the first network side device or to receive downlink data from the first network side device according to the uplink resource. 17. A second network side device, comprising: a receiver, configured to receive indication information of a user equipment (UE) in an intermediate state that is sent by a first network side device, wherein the intermediate state is a state that, when the UE stores context information of the UE, if the UE moves and a cell movement range falls within a coverage area of a cell or a base station comprised in a list sent by the first network side device, then the UE performs cell reselection according to the list; and a processor, configured to obtain the context information of the UE from the first network side device, or obtain the context information of the UE from the second network side device when the receiver receives a radio resource control (RRC) resume request message sent by the UE, so as to receive uplink data from the UE or send downlink data to the UE. 18. The second network side device according to claim 17, wherein the receiver is further configured to receive and save context information of the UE that is sent by the first network side device. 19. 
The second network side device according to claim 17, wherein the RRC resume request message is sent by the UE to the second network side device when the UE is restored from the intermediate state to a connection state, and wherein the processor is further configured to: if a memory stores the context information of the UE when the receiver receives the RRC resume request message sent by the UE, obtain the context information of the UE from the memory; or if a memory does not store the context information of the UE before the receiver receives the RRC resume request message sent by the UE, obtain the context information of the UE from the first network side device. 20. The second network side device according to claim 17, wherein the second network side device further comprises: a transmitter, configured to: when the second network side device receives a paging message sent by a core network device and determines that there is the downlink data that needs to be sent to the UE, send the paging message to the UE, and after the second network side device establishes an RRC connection to the UE, send, to the UE, the downlink data received from the core network device; or when a paging message sent by the first network side device is received and it is determined that there is the downlink data that needs to be sent to the UE, send the paging message to the UE, and after the second network side device establishes an RRC connection to the UE, send, to the UE, the downlink data received from the first network side device. 21. The UE according to claim 1, wherein in a process in which the processor is configured to perform cell reselection, when it is determined that the UE needs to receive downlink data from the network side, the processor is further configured to restore the UE to the connection state; and wherein the receiver is configured to receive the downlink data from the network side device that has been restored to the connection state, wherein the network side device that has been restored to the connection state comprises the first network side device or the second network side device.
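The RRC resume exchange recited in claims 8 and 15 above (a resume request carrying cell information, the C-RNTI, and a restoration indication; a resume confirmation; a resume completion) can be sketched at message level as follows. This is an illustrative sketch only; the class, field, and method names are assumptions, not signalling defined by the application.

class NetworkSideDevice:
    def __init__(self):
        self.contexts = {}  # stored UE contexts, keyed by C-RNTI

    def handle_resume_request(self, cell_info, c_rnti):
        # Obtain the stored context (a second device could instead fetch it
        # from the first network side device) and confirm restoration; per
        # claim 15 the confirmation may carry parameters that extend,
        # delete, or modify the stored context.
        context = self.contexts.get(c_rnti, {})
        return {"restored": True, "context_update": context}

    def handle_resume_complete(self, c_rnti):
        print(f"UE {c_rnti} restored to the connected state")

def ue_resume(ue, device):
    # RRC resume request: cell information, C-RNTI, restoration indication.
    confirm = device.handle_resume_request(ue["last_cell"], ue["c_rnti"])
    if confirm["restored"]:            # RRC resume confirmation received
        ue["state"] = "connected"
        device.handle_resume_complete(ue["c_rnti"])  # resume completion

ue_resume({"last_cell": "cellA", "c_rnti": 42, "state": "intermediate"},
          NetworkSideDevice())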
2,600
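A minimal state-machine sketch of the intermediate state described by the record above, with hypothetical names throughout: while the UE holds its context and moves only among cells in the configured list it reselects locally, with no signalling toward the network; moving outside the list triggers a notification to the serving node and exit from the state.

from enum import Enum, auto

class UEState(Enum):
    CONNECTED = auto()
    INTERMEDIATE = auto()
    IDLE = auto()

class UE:
    def __init__(self):
        self.state = UEState.CONNECTED
        self.allowed_cells = set()  # the cell/base station list from the config
        self.context = None         # stored UE context
        self.serving_cell = None

    def on_configuration(self, cell_list, context):
        # Configuration received in an RRC message (e.g. a connection
        # release message), per claim 5.
        self.allowed_cells = set(cell_list)
        self.context = context
        self.state = UEState.INTERMEDIATE

    def on_move(self, target_cell, network):
        if self.state is not UEState.INTERMEDIATE:
            return
        if target_cell in self.allowed_cells:
            # Inside the configured area: reselect without signalling.
            self.serving_cell = target_cell
        else:
            # Outside the list: notify the serving node, then leave the
            # intermediate state (restored to connected, or back to idle).
            network.notify(self.serving_cell, "left configured area")
            self.state = UEState.IDLE

class StubNetwork:
    def notify(self, cell, message):
        print(f"{cell}: {message}")

ue = UE()
ue.on_configuration(["c1", "c2"], context={"bearer": 1})
ue.on_move("c2", StubNetwork())  # silent reselection within the list
ue.on_move("c9", StubNetwork())  # notification sent, state left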
10,880
10,880
14,628,099
2,616
Examples are disclosed that relate to selectively dimming or occluding light from a real-world background to enhance the display of virtual objects on a near-eye display. One example provides a near-eye display system including a see-through display, an image source, a background light sensor, a selective background occluder comprising a first liquid crystal panel and a second liquid crystal panel positioned between a pair of polarizers, and a computing device including instructions executable by a logic subsystem to determine a shape and a position of an occlusion area based upon a virtual object to be displayed, obtain a first and a second birefringence pattern for the first and the second liquid crystal panels, produce the occlusion area by applying the birefringence patterns to the liquid crystal panels, and display the virtual object in a location visually overlapping with the occlusion area.
1. A near-eye display system, comprising: a see-through display; an image source configured to produce images for display on the see-through display; a background light sensor configured to sense a brightness of a real-world background; a selective background occluder comprising a first liquid crystal panel and a second liquid crystal panel spaced from the first liquid crystal panel, the first liquid crystal panel and the second liquid crystal panel being positioned between a common pair of polarizers; and a computing device comprising a logic subsystem and a storage subsystem storing instructions executable by the logic subsystem to: determine a shape and a position of an occlusion area based upon a virtual object to be displayed on the see-through display, obtain a first birefringence pattern for the first liquid crystal panel and a second birefringence pattern for the second liquid crystal panel based upon the shape and the position of the occlusion area, produce the occlusion area by applying the first birefringence pattern to the first liquid crystal panel and the second birefringence pattern to the second liquid crystal panel, and display the virtual object in a location visually overlapping with the occlusion area. 2. The near-eye display system of claim 1, wherein the instructions are further executable by the logic subsystem to obtain the first and second birefringence patterns by segmenting a perimeter of the occlusion area into perimeter segments, obtaining birefringence patterns for each of the perimeter segments, and constructing the first and second birefringence patterns from the birefringence patterns for the perimeter segments. 3. The near-eye display system of claim 1, wherein the instructions are further executable by the logic subsystem to obtain the first and second birefringence patterns by obtaining a birefringence pattern for an overall shape of the occlusion area. 4. The near-eye display system of claim 1, wherein the background light sensor comprises an image sensor configured to acquire an image of the real-world background, and wherein the instructions are further executable by the logic subsystem to determine the shape and the position of the occlusion area by determining the shape and the position based at least partially on one or more brightness features in the image of the real-world background. 5. The near-eye display system of claim 1, further comprising a gaze-tracking subsystem, and wherein the instructions are further executable by the logic subsystem to track a gaze position of a user's eye via the gaze-tracking subsystem, and to determine the position of the occlusion area by determining the position based upon the gaze position. 6. The near-eye display system of claim 5, wherein the instructions are further executable by the logic subsystem to control a focus distance of the occlusion area by scaling the second birefringence pattern for the second liquid crystal panel based upon one or more of depth information and the gaze position. 7. The near-eye display system of claim 1, wherein the instructions are further executable by the logic subsystem to modify the first birefringence pattern on the first liquid crystal panel and modify the second birefringence pattern on the second liquid crystal panel to move the position of the occlusion area based upon a detected change in relative position of the virtual object and the real-world background. 8. 
The near-eye display system of claim 1, further comprising a third liquid crystal panel positioned between the common pair of polarizers, and wherein the instructions are further executable by the logic subsystem to obtain a third birefringence pattern for the third liquid crystal panel. 9. The near-eye display system of claim 1, wherein the first liquid crystal panel and the second liquid crystal panel each comprises a passive-matrix liquid crystal panel. 10. On a near-eye display system comprising a see-through display and a selective background occluder comprising a first liquid crystal panel and a second liquid crystal panel positioned between a common pair of polarizers, a method of selectively dimming light from one or more areas of a real-world background, the method comprising: determining a shape and a position of an occlusion area based upon a virtual object to be displayed on the see-through display; obtaining a first birefringence pattern for the first liquid crystal panel and a second birefringence pattern for the second liquid crystal panel based upon the shape and the position of the occlusion area; producing the occlusion area by applying the first birefringence pattern to the first liquid crystal panel and the second birefringence pattern to the second liquid crystal panel; and displaying the virtual object in a location visually overlapping with the occlusion area. 11. The method of claim 10, wherein obtaining the first birefringence pattern and the second birefringence pattern comprises segmenting a perimeter of the occlusion area into perimeter segments, obtaining birefringence patterns for the perimeter segments, and constructing the first and second birefringence patterns from the birefringence patterns for the perimeter segments. 12. The method of claim 10, wherein obtaining the first and second birefringence patterns comprises obtaining a birefringence pattern for an overall shape of the occlusion area. 13. The method of claim 10, further comprising modifying the first birefringence pattern on the first liquid crystal panel and modifying the second birefringence pattern on the second liquid crystal panel to move the position of the occlusion area based upon a detected change in relative position of the virtual object and the real-world background. 14. The method of claim 10, wherein determining the shape and the position of the occlusion area comprises acquiring an image of the real-world background and determining the shape and the position based at least partially on one or more brightness features in the image of the real-world background. 15. The method of claim 10, further comprising tracking a gaze position of a user's eye, and wherein determining the position of the occlusion area further comprises determining the position based upon the gaze position. 16. The method of claim 15, further comprising controlling a focus distance of the occlusion area by scaling the second birefringence pattern for the second liquid crystal panel based upon one or more of depth information and the gaze position. 17. The method of claim 10, wherein the selective background occluder comprises a third liquid crystal panel positioned between the common pair of polarizers, and further comprising obtaining a third birefringence pattern for the third liquid crystal panel. 18. 
A near-eye display system, comprising: a see-through display; an image source configured to produce images for display on the see-through display; a background light sensor configured to sense a brightness of a real-world background; a selective background occluder comprising a first liquid crystal panel and a second liquid crystal panel spaced from the first liquid crystal panel, the first liquid crystal panel and the second liquid crystal panel being positioned between a common pair of polarizers; and a computing device comprising a logic subsystem and a storage subsystem storing instructions executable by the logic subsystem to determine a shape of an occlusion area based upon a virtual object to be displayed via the see-through display, and operate the first liquid crystal panel and the second liquid crystal panel such that light passing through a first pixel of the first liquid crystal panel and a first pixel of the second liquid crystal panel is attenuated differently than light passing through the first pixel of the first liquid crystal panel and a second pixel of the second liquid crystal panel. 19. The near-eye display system of claim 18, wherein the background light sensor comprises an image sensor configured to acquire an image of the real-world background, and wherein the instructions are further executable by the logic subsystem to determine the shape and the position of the occlusion area by determining the shape and the position based at least partially on one or more brightness features in the image of the real-world background. 20. The near-eye display system of claim 19, wherein the instructions are further executable by the logic subsystem to operate the first liquid crystal panel and the second liquid crystal panel by applying a first birefringence pattern to the first liquid crystal panel and a second birefringence pattern to the second liquid crystal panel based upon the shape and the position of the occlusion area.
2,600
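The dual-panel occluder in the record above can be illustrated with a toy optical model. Assume, purely for illustration, that each liquid crystal pixel acts as a polarization rotator, that the two panels sit between parallel polarizers, and that rotations add, so transmitted intensity follows Malus's law at the analyzer; then light crossing a given pixel of the first panel is attenuated differently depending on which pixel of the second panel it also crosses, which is the behaviour claim 18 recites.

import numpy as np

def transmission(theta1, theta2):
    # Fraction of light passing polarizer -> panel 1 -> panel 2 -> analyzer
    # for a ray crossing pixels with rotations theta1, theta2 (radians),
    # under the simplified additive-rotation model.
    return np.cos(theta1 + theta2) ** 2

# Per-pixel rotations for two tiny 2x2 "panels".
panel1 = np.zeros((2, 2))
panel1[0, 0] = np.pi / 4
panel2 = np.zeros((2, 2))
panel2[0, 0] = np.pi / 4   # combined pi/2 rotation -> ray blocked
panel2[0, 1] = -np.pi / 4  # cancels panel 1 -> ray fully transmitted

ray_a = transmission(panel1[0, 0], panel2[0, 0])  # ~0.0: occluded
ray_b = transmission(panel1[0, 0], panel2[0, 1])  # ~1.0: transparent
print(ray_a, ray_b)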
10,881
10,881
12,645,037
2,691
A controlling device has a moveable touch sensitive panel positioned above a plurality of switches. When the controlling device senses an activation of at least one of the plurality of switches caused by a movement of the touch sensitive panel resulting from an input at an input location upon the touch sensitive surface, the controlling device responds by transmitting a signal to an appliance wherein the signal is reflective of the input location upon the touch sensitive surface.
1. A controlling device, comprising: a casing having an opening; and an input device disposed in the opening comprised of a moveable touch sensitive panel positioned above a plurality of switches; wherein the controlling device responds to an activation of at least one of the plurality of switches caused by a movement of the touch sensitive panel resulting from an input at an input location upon the touch sensitive surface by transmitting a signal to an appliance that is reflective of the input location upon the touch sensitive surface. 2. The controlling device as recited in claim 1, wherein the signal comprises a command to control a functional operation of the appliance. 3. The controlling device as recited in claim 2, wherein a plurality of surface touch zones are defined for the touch sensitive panel and the command corresponds to a one of the plurality of surface touch zones having the input location upon the touch sensitive panel. 4. The controlling device as recited in claim 3, wherein the plurality of surface touch zones for the touch sensitive panel are defined as a function of an active one of a plurality of operational modes for the controlling device. 5. The controlling device as recited in claim 1, wherein the signal comprises data representative of coordinates for the input location upon the touch sensitive surface. 6. The controlling device as recited in claim 1, wherein the touch sensitive panel comprises a keycap disposed over a multiple-electrode capacitive touch sensor. 7. The controlling device as recited in claim 6, wherein the plurality of switches comprise silicone rubber keypad buttons supported upon a printed circuit board. 8. The controlling device as recited in claim 1, wherein the keycap displays a plurality of user interface elements. 9. The controlling device as recited in claim 8, wherein a plurality of surface touch zones are defined for the touch sensitive panel each corresponding to a respective one of the plurality of user interface elements. 10. The controlling device as recited in claim 9, wherein the plurality of user interface elements displayed by the keycap is defined as a function of an active one of a plurality of operational modes for the controlling device. 11. A controlling device, comprising: a casing having an opening; and an input device disposed in the opening comprised of a moveable touch sensitive panel having a plurality of defined surface touch zones positioned above a plurality of switches; wherein the controlling device responds to an activation of at least one of the plurality of switches caused by a movement of the touch sensitive panel resulting from an input at an input location upon the touch sensitive surface by transmitting a signal to an appliance that is reflective of a one of the plurality of defined surface touch zones determined to include the input location upon the touch sensitive surface. 12. The controlling device as recited in claim 11, wherein the signal comprises a command to control a functional operation of the appliance. 13. The controlling device as recited in claim 11, wherein the plurality of surface touch zones for the touch sensitive panel are defined as a function of an active one of a plurality of operational modes for the controlling device. 14. The controlling device as recited in claim 11, wherein the touch sensitive panel comprises a keycap disposed over a multiple-electrode capacitive touch sensor. 15. 
The controlling device as recited in claim 14, wherein the keycap displays a plurality of user interface elements. 16. The controlling device as recited in claim 15, wherein the plurality of surface touch zones defined for the touch sensitive panel each correspond to a respective one of the plurality of user interface elements. 17. The controlling device as recited in claim 16, wherein the plurality of user interface elements displayed by the keycap is defined as a function of an active one of a plurality of operational modes for the controlling device. 18. A method for using a controlling device, comprising a casing having an opening and an input device disposed in the opening comprised of a moveable touch sensitive panel positioned above a plurality of switches, to transmit a signal to an appliance, the method comprising: sensing by the controlling device an activation of at least one of the plurality of switches caused by a movement of the touch sensitive panel resulting from an input at an input location upon the touch sensitive surface; and in response to the sensed activation of at least one of the plurality of switches, causing the controlling device to transmit the signal to the appliance wherein the signal is reflective of the input location upon the touch sensitive surface. 19. The method as recited in claim 18, wherein the signal comprises a command to control a functional operation of the appliance. 20. The method as recited in claim 19, comprising defining a plurality of surface touch zones for the touch sensitive panel whereby the command corresponds to a one of the plurality of surface touch zones having the input location upon the touch sensitive panel. 21. The method as recited in claim 20, comprising defining the plurality of surface touch zones for the touch sensitive panel as a function of an active one of a plurality of operational modes for the controlling device. 22. The method as recited in claim 18, wherein the signal comprises data representative of coordinates for the input location upon the touch sensitive surface. 23. The method as recited in claim 18, wherein the touch sensitive panel comprises a keycap disposed over a multiple-electrode capacitive touch sensor and comprising using the keycap to display a plurality of user interface elements. 24. The method as recited in claim 23, comprising defining a plurality of surface touch zones for the touch sensitive panel each corresponding to a respective one of the plurality of user interface elements. 25. The method as recited in claim 24, comprising using an active one of a plurality of operational modes for the controlling device to determine the plurality of user interface elements to be displayed.
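The zone-to-command mapping described in claims 3-4 and 20-21 can be pictured as a per-mode lookup table. Below is a minimal Python sketch of that idea; the mode names, zone tables, and `transmit_to_appliance` stub are all illustrative assumptions, not anything disclosed in the application.

```python
# Illustrative sketch of the surface-touch-zone lookup in claims 3-4:
# zones are defined as a function of the active operational mode, and a
# switch activation beneath the panel triggers whichever command belongs
# to the zone containing the touch location.
from typing import Optional

ZONES_BY_MODE = {
    # mode: list of (x0, y0, x1, y1, command) rectangles on the keycap,
    # in normalized panel coordinates
    "tv":    [(0.0, 0.0, 0.5, 1.0, "VOL_UP"), (0.5, 0.0, 1.0, 1.0, "VOL_DOWN")],
    "audio": [(0.0, 0.0, 1.0, 0.5, "PLAY"),   (0.0, 0.5, 1.0, 1.0, "PAUSE")],
}

def resolve_command(mode: str, x: float, y: float) -> Optional[str]:
    """Return the command for the zone containing the touch, if any."""
    for x0, y0, x1, y1, command in ZONES_BY_MODE.get(mode, []):
        if x0 <= x < x1 and y0 <= y < y1:
            return command
    return None

def transmit_to_appliance(payload) -> None:
    print("sending:", payload)  # stand-in for an IR/RF transmission

def on_switch_activated(mode: str, x: float, y: float) -> None:
    # The switch press signals *that* an input occurred; the capacitive
    # sensor above it reports *where*. The transmitted signal reflects
    # the location, either as a zone command or as raw coordinates.
    command = resolve_command(mode, x, y)
    if command is not None:
        transmit_to_appliance(command)        # claims 2-3: zone command
    else:
        transmit_to_appliance(("RAW", x, y))  # claim 5: coordinate data

on_switch_activated("tv", 0.25, 0.5)  # prints: sending: VOL_UP
```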
2,600
10,882
10,882
15,647,947
2,693
Apparatus, system, and method for using a controlling device to receive voice input for controlling the operation of voice controlled smart appliances and, more particularly, a controlling device that recognizes voice commands and routes a signal, based on the user's voice commands, to two or more voice controlled smart appliances from different consumer brand names.
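The wake-word routing the abstract alludes to (spelled out in claim 9 below) amounts to a small dictionary keyed by wake-word. A hedged Python sketch follows; the registration API and appliance identifiers are hypothetical names chosen for illustration.

```python
# Minimal sketch of the wake-word routing in claim 9: each registered
# appliance is cross-referenced to a predetermined wake-word, and a
# matching received wake-word routes the spoken command to that appliance.

WAKE_WORDS = {}  # wake-word -> appliance identification

def register_appliance(wake_word: str, appliance_id: str) -> None:
    WAKE_WORDS[wake_word.lower()] = appliance_id

def route_utterance(utterance: str):
    """Split '<wake-word> <command>' and return (appliance_id, command)."""
    wake, _, command = utterance.partition(" ")
    appliance_id = WAKE_WORDS.get(wake.lower())
    if appliance_id is None:
        return None  # no match: ignore, or fall back to a default appliance
    return appliance_id, command

register_appliance("brandx", "tv-livingroom")
register_appliance("brandy", "speaker-kitchen")
print(route_utterance("BrandX turn the volume up"))
# -> ('tv-livingroom', 'turn the volume up')
```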
1. A controlling device for providing formatted voice data to two or more smart appliances, comprising: an electronic storage medium having processor-readable code embodied therein and storing a plurality of device profiles, wherein each device profile comprises a formatting protocol for formatting voice commands in conformity with a protocol used by at least one of the two or more smart appliances; a first communication interface for transmitting the formatted voice commands to at least one of the two or more smart appliances; a microphone for receiving voice input; and a processor, coupled to the electronic storage medium, the communication interface, and the microphone, for executing the processor-readable code that causes the controlling device to perform steps comprising: receiving, by the processor, via the microphone, a first voice command; determining, by the processor, a first smart appliance to which the first voice command is intended; identifying, by the processor, a first formatting protocol in the electronic storage medium associated with the first smart appliance; formatting, by the processor, the voice command into a formatted voice command in conformance with the first formatting protocol; and transmitting, by the processor, via the communication interface, the formatted voice command to the first smart appliance. 2. The controlling device as recited in claim 1, further comprising: a second communication interface; wherein the processor-readable code that causes the processor to determine a first smart appliance to which the first voice command is intended further includes instructions that cause the controlling device to perform steps comprising: transmitting, by the processor, via the second communication interface, the first voice command to a speech-processing service over a wide-area network; and receiving, by the processor, via the second communication interface, a message from the speech-processing service, the message comprising an identification of a first smart appliance to which the voice command is intended. 3. The controlling device as recited in claim 1, wherein the first communication interface comprises a first transmitter for transmitting the formatted voice command to the first smart appliance using a first communication technology, and a second transmitter for transmitting a second formatted voice command to a second smart appliance using a second communication technology. 4. The controlling device as recited in claim 1, wherein the processor-readable code that causes the controlling device to determine a first smart appliance to which the first voice command is intended comprises further instructions that cause the controlling device to: transmit, by the processor, via the communication interface, the voice command to one of the two or more smart appliances, wherein the smart appliance causes the voice command to be transmitted to a speech-processing service; and receive, by the processor, via the communication interface, a message from the one of the two or more smart appliances, the message comprising an identification of the first smart appliance to which the voice command is intended. 5. 
A method for providing formatted voice data to two or more smart appliances, performed by a controlling device, having a processor, a microphone, an electronic storage medium, and a communication interface, in cooperation with a smart appliance, comprising: receiving, by the processor, via the microphone, a first voice command; determining, by the processor, a first smart appliance to which the first voice command is intended; identifying, by the processor, a first formatting protocol in the electronic storage medium that is associated with the first smart appliance; formatting, by the processor, the voice command into a formatted voice command in conformance with the first formatting protocol; and transmitting, by the processor, via the communication interface, the formatted voice command to the first smart appliance. 6. The method of claim 5, further comprising: identifying, by the processor, a second smart appliance owned by the user; and sending, by the processor via the communication interface, the formatted voice command to the second smart appliance. 7. The method of claim 5, further comprising: identifying, by the processor, a first formatting protocol in the electronic storage medium associated with the first smart appliance, wherein the protocol is a proprietary protocol. 8. The method of claim 5, further comprising: identifying, by the processor, a first formatting protocol in the electronic storage medium associated with the first smart appliance, wherein the protocol is a Voice over IP protocol. 9. A method for providing formatted voice data to two or more smart appliances, performed by a controlling device, having a processor, a microphone, an electronic storage medium, and a communication interface, in cooperation with a smart appliance, comprising: receiving, by the processor, via the communication interface, an identification of a smart appliance wherein the identification of the smart appliance is cross-referenced to a predetermined wake-word; storing, by the processor, via the electronic storage medium, the appliance identification; receiving, by the processor, via the microphone, at least a wake-word and a voice command from a user; determining, by the processor, a smart appliance identification, stored in the electronic storage medium, which corresponds to the received wake-word and the predetermined wake-word; and when the predetermined wake-word and the received wake-word are determined to match, transmitting, by the processor, via the communication interface, the voice command to the intended smart appliance. 10. The method of claim 9, further comprising: receiving, by the processor, via the microphone, at least a wake-word from a user and a voice command, wherein the wake-word is an alphanumeric brand name. 11. The method of claim 9, further comprising: receiving, by the processor, via the microphone, at least a wake-word from a user and a voice command, wherein the wake-word is an alphanumeric code. 12. The method of claim 9, further comprising: receiving, by the processor, via the microphone, at least a wake-word from a user and a voice command, wherein the voice command is a dictation. 13. 
A method for providing formatted voice data to two or more smart appliances, performed by a controlling device in cooperation with a smart appliance, comprising: receiving, by a processor of the controlling device, via a microphone, a voice command from a user; in response to receiving the voice command, transmitting, by the processor of the controlling device, via a communication interface, an HDMI input status request to a coupled smart appliance; in response to the smart appliance receiving the HDMI input status request, causing a processor of the smart appliance to detect an active HDMI input, the active HDMI input comprising a signal from an appliance presently being presented by the smart appliance, to determine an appliance identification associated with the active HDMI input, and send, via the communication interface of the smart appliance, the smart appliance identification to the controlling device; receiving, by the processor of the controlling device, via the communication interface of the controlling device, the smart appliance identification; and formatting, by the processor of the controlling device, the voice command in accordance with a formatting protocol stored in an electronic storage medium of the controlling device associated with the appliance identification. 14. The method of claim 13, wherein determining an appliance identification associated with the active HDMI input further comprises: requesting, by the processor of the smart appliance, a smart appliance identification associated with a smart appliance connected to the active HDMI input; receiving, by the processor of the smart appliance, the smart appliance identification from the smart appliance connected to the active HDMI input; and sending, by the processor of the smart appliance via the communication interface associated with the smart appliance, the smart appliance identification to the controlling device. 15. The method of claim 13, wherein determining an appliance identification associated with the active HDMI input further comprises: sending, by the processor of the smart appliance, via a second communication interface of the smart appliance, the HDMI input information to a remote server over a wide-area network; receiving, by the remote server, the HDMI input information; determining, by the remote server, a smart appliance identification based on the HDMI input information; sending the smart appliance identification to the smart appliance via the wide-area network; and receiving, by the processor of the smart appliance via the second communication interface, the smart appliance identification. 16. 
A system for providing formatted voice data to two or more smart appliances, comprising: a remote server; a controlling device, for receiving a voice command from a user, via a microphone; a first smart appliance, comprising a processor readable code that causes the first smart appliance to perform the steps comprising: receiving, by a processor of the first smart appliance, a first voice command, via a communication interface, from a controlling device; formatting, by the processor of the first smart appliance, the first voice command into a formatted voice command in conformance with a first formatting protocol; transmitting, by the processor of the first smart appliance, via a communication interface, the formatted voice command to a remote server, wherein a processor of the remote server receives the formatted voice command, via a communication interface, and uses the voice command to determine a second smart appliance to which the first voice command is intended; receiving, by the processor of the first smart appliance, a determination of the second smart appliance for which the voice command is intended, via a communication interface, from the remote server; and transmitting, by the processor of the first smart appliance, via the communication interface, the formatted voice command to the intended second smart appliance. 17. The system as recited in claim 16, further comprising a processor readable code that causes the first smart appliance to perform the steps comprising: receiving, by the processor of the first smart appliance, a first voice command, via a communication interface, from a controlling device, wherein receiving the first voice command causes the processor of the first smart appliance to perform the steps comprising: scanning a local-area network for connected smart appliances, via a communication interface; transmitting, by the processor of the first smart appliance, via a communication interface, a state information request to each connected smart appliance; receiving, by the processor of the first smart appliance, via the communication interface, the state information from each connected smart appliance; and transmitting the state information to the remote server for performing a determination of the second smart appliance for which the voice command is intended.
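Claims 1 and 5 describe looking up a per-appliance formatting protocol from stored device profiles before dispatching the captured audio. A small Python sketch of that dispatch step is below; the profile fields, envelope formats, and transport labels are assumptions for illustration, not the application's actual protocols.

```python
# Sketch of the device-profile lookup in claims 1 and 5: each profile
# stores a formatting protocol, and the controlling device formats the
# captured voice command to match whichever appliance it is intended for.
import json

DEVICE_PROFILES = {
    "tv-livingroom":   {"protocol": "proprietary-v1", "transport": "wifi"},
    "speaker-kitchen": {"protocol": "voip",           "transport": "bluetooth"},
}

def format_voice_command(appliance_id: str, pcm_audio: bytes) -> bytes:
    profile = DEVICE_PROFILES[appliance_id]
    if profile["protocol"] == "voip":
        # e.g. wrap the raw audio for an RTP-style transport (placeholder)
        return b"VOIP" + pcm_audio
    # a proprietary protocol might expect a JSON envelope around the audio
    header = json.dumps({"target": appliance_id, "len": len(pcm_audio)})
    return header.encode() + b"\n" + pcm_audio

def send(appliance_id: str, payload: bytes) -> None:
    transport = DEVICE_PROFILES[appliance_id]["transport"]
    print(f"sending {len(payload)} bytes over {transport}")

audio = b"\x00\x01\x02\x03"  # stand-in for captured microphone samples
send("tv-livingroom", format_voice_command("tv-livingroom", audio))
```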
2,600
10,883
10,883
15,851,760
2,664
Receive first context information (FCI) including entered in-field incident timeline information values from an in-field incident timeline application and a time associated with an entry of the FCI values. Access a mapping that maps in-field incident timeline information values to events having a pre-determined threshold confidence of occurring and identify an event associated with the received FCI. Determine a location associated with the entry of the FCI and a time period associated with the entry of the FCI. Access a camera location database and identify cameras that have a field of view including the location during the time period. Retrieve audio and/or video streams captured by the cameras during the time period. Finally, provide the audio and/or video streams to machine learning training modules corresponding to machine learning models for detecting the event in audio and/or video streams, for further training of the machine learning models.
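The camera-selection step in this abstract (find cameras whose field of view covered the entry's location during the time period) is the part most easily shown in code. Here is a rough Python sketch; the database layout, the circular field-of-view stand-in, and the distance approximation are all my assumptions, not the application's model.

```python
# Rough sketch of the camera-selection step: given the location and time
# period tied to a timeline entry, return cameras whose (simplified)
# field of view included that location during that window.
from dataclasses import dataclass

@dataclass
class CameraRecord:
    camera_id: str
    lat: float
    lon: float
    radius_m: float     # crude circular stand-in for a real FOV model
    online_from: float  # epoch seconds
    online_to: float

def cameras_covering(db, lat, lon, t_start, t_end):
    hits = []
    for cam in db:
        overlaps = cam.online_from <= t_end and cam.online_to >= t_start
        # Equirectangular approximation; adequate at city scale.
        dx = (lon - cam.lon) * 111_320 * 0.7  # rough cos(latitude) factor
        dy = (lat - cam.lat) * 110_540
        if overlaps and (dx * dx + dy * dy) ** 0.5 <= cam.radius_m:
            hits.append(cam.camera_id)
    return hits

db = [CameraRecord("cam7", 40.7128, -74.0060, 150.0, 0.0, 2e9)]
print(cameras_covering(db, 40.7129, -74.0061, 1.5e9, 1.5e9 + 300))
# -> ['cam7']
```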
1. A method at an electronic computing device for adaptive training of machine learning models via detected in-field contextual incident timeline entry and associated located and retrieved digital audio and/or video imaging, the method comprising: receiving, at the electronic computing device, first context information including one or more entered in-field incident timeline information values from an in-field incident timeline application and a time associated with an entry of the one or more first context information values at the in-field incident timeline application; accessing, by the electronic computing device, an incident timeline information to detectable event mapping that maps in-field incident timeline information values to events having a pre-determined threshold confidence of occurring; identifying, by the electronic computing device, via the incident timeline information to detectable event mapping using the first context information, a particular event associated with the received first context information; determining, by the electronic computing device, a geographic location associated with the entry of the first context information and a time period relative to the time associated with the entry of the first context information; accessing, by the electronic computing device, an imaging camera location database and identifying, via the imaging camera location database, one or more particular imaging cameras that has or had a field of view including the determined geographic location during the determined time period; retrieving, by the electronic computing device, one or more audio and/or video streams captured by the one or more particular imaging cameras during the determined time period; identifying, by the electronic computing device, one or more machine learning training modules corresponding to one or more machine learning models for detecting the particular event in audio and/or video streams; and providing, by the electronic computing device, the one or more audio and/or video streams to the identified one or more machine learning training modules for further training of corresponding machine learning models. 2. The method of claim 1, wherein at least a first particular imaging camera of the one or more particular imaging cameras is a body worn camera of a user assigned to or responding to the particular event. 3. The method of claim 1, wherein at least a first particular imaging camera of the one or more particular imaging cameras is a fixed camera unassociated with the particular event. 4. The method of claim 1, wherein the in-field incident timeline information values include a detected entry of an infraction issuance for violation of a law, rule, or ordinance by a first officer involved in the particular event at a mobile computing device associated with the first officer, the particular event is an underlying action in violation of the law, rule, or ordinance for which the violation is issued, and the identified one or more machine learning models is a machine learning model for detecting the underlying action in violation of the law, rule, or ordinance in an audio and/or video stream. 5. The method of claim 4, wherein the underlying action is vehicular speeding above a speed limit set by the law, rule, or ordinance, the infraction issuance is issuance of a speeding ticket, and the identified one or more machine learning models is a machine learning model for detecting vehicular speeding in an audio and/or video stream. 6. 
The method of claim 4, wherein the underlying action is driving while impaired as set by the law, rule, or ordinance, the infraction issuance is issuance of a driving while impaired ticket, and the identified one or more machine learning models is a machine learning model for detecting a vehicle driven while impaired in an audio and/or video stream. 7. The method of claim 4, wherein the underlying action is unlawful open possession of a weapon as set by the law, rule, or ordinance, the infraction issuance is issuance of an unlawful open possession of a weapon ticket, and the identified one or more machine learning models is a machine learning model for detecting an unlawful open possession of a weapon in an audio and/or video stream. 8. The method of claim 1, wherein the in-field incident timeline information values include a detected entry of a new incident having a particular indicated incident type entered by a first officer involved in the new incident at a mobile computing device associated with the first officer, the particular event is the new incident having the particular indicated incident type, and the identified one or more machine learning models is a machine learning model for detecting an occurrence of incidents having the particular indicated incident type in an audio and/or video stream. 9. The method of claim 8, wherein the particular indicated incident type is one of a robbery, theft, abduction, kidnap, assault, battery, and hijacking. 10. The method of claim 1, wherein the first context information further comprises in-field sensor information and a second time associated with capture of the in-field sensor information, the electronic computing device using the in-field sensor information and the second time to validate the received in-field incident timeline information values prior to providing the one or more audio and/or video streams to the identified one or more machine learning training modules for further training of the corresponding machine learning models. 11. The method of claim 1, wherein determining the geographic location associated with entry of the first context information comprises receiving global positioning system (GPS) coordinate information from a mobile computing device executing the in-field incident timeline application and using the GPS coordinate information as the geographic location; and wherein the mobile computing device executing the in-field incident timeline application is one of a portable radio and a mobile radio. 12. 
The method of claim 1, wherein identifying the one or more particular imaging cameras that has or had a field of view of the determined geographic location during the time associated with the entry of the first context information comprises identifying a plurality of particular imaging cameras having or that had a field of view of the determined geographic location during the time associated with the entry of the first context information, the plurality of particular imaging cameras having varying resolutions and/or frame rates. 14. The method of claim 1, wherein the time associated with the entry of the first context information is a discrete point in time or is a particular window of time over which the first context information was entered or captured; and wherein retrieving the one or more audio and/or video streams captured by the one or more particular imaging cameras during the time associated with the entry of the first context information further comprises additionally retrieving the one or more audio and/or video streams captured by the one or more particular imaging cameras during an additional prior buffer time occurring before the time associated with the entry of the first context information and during an additional post buffer time occurring after the time associated with the entry of the first context information. 15. The method of claim 1, wherein each of the one or more machine learning training modules is a periodically executed re-training of the machine learning model via a stored collection of training data, and providing the one or more audio and/or video streams to the identified one or more machine learning training modules for further training of the corresponding machine learning models comprises adding the one or more audio and/or video streams to a corresponding stored collection of training data for the corresponding machine learning model. 16. The method of claim 1, wherein each of the one or more machine learning training modules is an on-demand executed re-training of the machine learning model via a stored collection of training data, and providing the one or more audio and/or video streams to the identified one or more machine learning training modules for further training of the corresponding machine learning models comprises adding the one or more audio and/or video streams to a corresponding stored collection of training data to create a modified collection of training data for the corresponding machine learning model and transmitting an instruction to initiate re-training of the corresponding machine learning model using the modified collection of training data. 17. 
The method of claim 1, wherein retrieving the one or more audio and/or video streams captured by the one or more particular imaging cameras comprises accessing a digital evidence management system and retrieving the one or more audio and/or video streams using a time parameter determined as a function of the time associated with capture of the first context information and a camera parameter determined via the imaging camera location database. 19. The method of claim 1, wherein retrieving the one or more audio and/or video streams captured by the one or more particular imaging cameras comprises accessing one or more live video transport streams at network locations for the one or more particular imaging cameras as retrieved from the imaging camera location database and locally storing the one or more audio and/or video streams. 20. An electronic computing device implementing an adaptive training of machine learning models via detected in-field contextual incident timeline entry and associated located and retrieved digital audio and/or video imaging, the electronic computing device comprising: a memory storing non-transitory computer-readable instructions; a transceiver; and one or more processors configured to, in response to executing the non-transitory computer-readable instructions, perform a first set of functions comprising: receive, via the transceiver, first context information including one or more entered in-field incident timeline information values from an in-field incident timeline application and a time associated with an entry of the one or more first context information values at the in-field incident timeline application; access an incident timeline information to detectable event mapping that maps in-field incident timeline information values to events having a pre-determined threshold confidence of occurring; identify, via the incident timeline information to detectable event mapping using the first context information, a particular event associated with the received first context information; determine a geographic location associated with the entry of the first context information and a time period relative to the time associated with the entry of the first context information; access an imaging camera location database and identify, via the imaging camera location database, one or more particular imaging cameras that has or had a field of view including the determined geographic location during the determined time period; retrieve one or more audio and/or video streams captured by the one or more particular imaging cameras during the determined time period; identify one or more machine learning training modules corresponding to one or more machine learning models for detecting the particular event in and/or video streams; and provide the one or more audio and/or video streams to the identified one or more machine learning training modules for further training of corresponding machine learning models.
Receive first context information (FCI) including entered in-field incident timeline information values from an in-field incident timeline application and a time associated with an entry of the FCI values. Access a mapping that maps in-field incident timeline information values to events having a pre-determined threshold confidence of occurring and identify an event associated with the received FCI. Determine a location associated with the entry of the FCI and a time period associated with the entry of the FCI. Access a camera location database and identify cameras that have a field of view including the location during the time period. Retrieve audio and/or video streams captured by the cameras during the time period. And provide the audio and/or video streams to machine learning training modules corresponding to machine learning models for detecting the event in and/or video streams for further training of the machine learning models.1. A method at an electronic computing device for adaptive training of machine learning models via detected in-field contextual incident timeline entry and associated located and retrieved digital audio and/or video imaging, the method comprising: receiving, at the electronic computing device, first context information including one or more entered in-field incident timeline information values from an in-field incident timeline application and a time associated with an entry of the including one or more first context information values at the in-field incident timeline application; accessing, by the electronic computing device, an incident timeline information to detectable event mapping that maps in-field incident timeline information values to events having a pre-determined threshold confidence of occurring; identifying, by the electronic computing device, via the incident timeline information to detectable event mapping using the first context information, a particular event associated with the received first context information; determining, by the electronic computing device, a geographic location associated with the entry of the first context information and a time period relative to the time associated with the entry of the first context information; accessing, by the electronic computing device, an imaging camera location database and identifying, via the imaging camera location database, one or more particular imaging cameras that has or had a field of view including the determined geographic location during the determined time period; retrieving, by the electronic computing device, one or more audio and/or video streams captured by the one or more particular imaging cameras during the determined time period; identifying, by the electronic computing device, one or more machine learning training modules corresponding to one or more machine learning models for detecting the particular event in and/or video streams; and providing, by the electronic computing device, the one or more audio and/or video streams to the identified one or more machine learning training modules for further training of corresponding machine learning models. 2. The method of claim 1, wherein at least a first particular imaging camera of the one or more particular imaging cameras is a body worn camera of a user assigned to or responding to the particular event. 3. The method of claim 1, wherein at least a first particular imaging camera of the one or more particular imaging cameras is a fixed camera unassociated with the particular event. 4. 
The method of claim 1, wherein the in-field incident timeline information values include a detected entry of an infraction issuance for violation of a law, rule, or ordinance by a first officer involved in the particular event at a mobile computing device associated with the first officer, the particular event is an underlying action in violation of the law, rule, or ordinance for which the violation is issued, and the identified one or more machine learning models is a machine learning model for detecting the underlying action in violation of the a law, rule, or ordinance in an audio and/or video stream. 5. The method of claim 4, wherein the underlying action is vehicular speeding above a speed limit set by the law, rule, or ordinance, the infraction issuance is issuance of a speeding ticket, and the identified one or more machine learning models is a machine learning model for detecting vehicular speeding in an audio and/or video stream. 6. The method of claim 4, wherein the underlying action is driving while impaired set by the law, rule, or ordinance, the infraction issuance is issuance of a driving while impaired ticket, and the identified one or more machine learning models is a machine learning model for detecting a vehicle driven while impaired in an audio and/or video stream. 7. The method of claim 4, wherein the underlying action is unlawful open possession of a weapon set by the law, rule, or ordinance, the infraction issuance is issuance of an unlawful open possession of a weapon ticket, and the identified one or more machine learning models is a machine learning model for detecting an unlawful open possession of a weapon in an audio and/or video stream. 8. The method of claim 1, wherein the in-field incident timeline information values include a detected entry of a new incident having a particular indicated incident type entered by a first officer involved in the new incident at a mobile computing device associated with the first officer, the particular event is the new incident having the particular indicated incident type, and the identified one or more machine learning models is a machine learning model for detecting an occurrence of incidents having the particular indicated incident type in an audio and/or video stream. 9. The method of claim 8, wherein the particular indicated incident type is one of a robbery, theft, abduction, kidnap, assault, battery, and hijacking. 10. The method of claim 1, wherein the first context information further comprises in-field sensor information and a second time associated with capture of the in-field sensor information, the electronic computing device using the in-field sensor information and the second time to validate the received in-field incident timeline information values prior to providing the one or more audio and/or video streams to the identified one or more machine learning training modules for further training of the corresponding machine learning models. 11. The method of claim 1, wherein determining the geographic location associated with entry of the first context information comprises receiving a global positioning system (GPS) coordinate information from a mobile computing device executing the in-field incident timeline application and using the GPS coordinate information as the geographic location; and wherein the mobile computing device executing the in-field incident timeline application is one of a portable radio and a mobile radio. 12. 
The method of claim 1, wherein the identified one or more particular imaging cameras include respective audio capture devices and the retrieved one or more audio and/or video streams include one or more corresponding audio streams, the method further comprising: identifying, by the electronic computing device, one or more second machine learning training modules corresponding to one or more second machine learning models for detecting the particular event in audio streams; and providing, by the electronic computing device, the one or more corresponding audio streams to the identified one or more second machine learning training modules for further training of the corresponding second machine learning models. 13. The method of claim 1, wherein identifying the one or more particular imaging cameras that has or had a field of view of the determined geographic location during the time associated with the entry of the first context information comprises identifying a plurality of particular imaging cameras and having or that had a field of view of the determined geographic location during the time associated with the entry of the first context information, the plurality of particular imaging cameras having varying resolutions and/or frame rates. 14. The method of claim 1, wherein the time associated with the entry of the first context information is a discrete point in time or is a particular window of time over which the first context information was entered or captured; and wherein retrieving the one or more audio and/or video streams captured by the one or more particular imaging cameras during the time associated with the entry of the first context information further comprises additionally retrieving the one or more audio and/or video streams captured by the one or more particular imaging cameras during an additional prior buffer time occurring before the time associated with the entry of the first context information and during an additional post buffer time occurring after the time associated with the entry of the first context information. 15. The method of claim 1, wherein each of the one or more machine learning training modules is a periodically executed re-training of the machine learning model via a stored collection of training data, and providing the one or more audio and/or video streams to the identified one or more machine learning training modules for further training of the corresponding machine learning models comprises adding the one or more audio and/or video streams to a corresponding stored collection of training data for the corresponding machine learning model. 16. The method of claim 1, wherein each of the one or more machine learning training modules is an on-demand executed re-training of the machine learning model via a stored collection of training data, and providing the one or more audio and/or video streams to the identified one or more machine learning training modules for further training of the corresponding machine learning models comprises adding the one or more audio and/or video streams to a corresponding stored collection of training data to create a modified collection of training data for the corresponding machine learning model and transmitting an instruction to initiate re-training of the corresponding machine learning model using the modified collection of training data. 17. 
The method of claim 1, wherein identifying the one or more machine learning training modules corresponding to the one or more machine learning models for detecting the particular event in audio and/or video streams comprises accessing an event to machine learning model mapping that maps each of a plurality of events to corresponding one or more machine learning training modules by a unique identifier associated with each machine learning training module. 18. The method of claim 1, wherein retrieving the one or more audio and/or video streams captured by the one or more particular imaging cameras comprises accessing a digital evidence management system and retrieving the one or more audio and/or video streams using a time parameter determined as a function of the time associated with capture of the first context information and a camera parameter determined via the imaging camera location database. 19. The method of claim 1, wherein retrieving the one or more audio and/or video streams captured by the one or more particular imaging cameras comprises accessing one or more live video transport streams at network locations for the one or more particular imaging cameras as retrieved from the imaging camera location database and locally storing the one or more audio and/or video streams. 20. An electronic computing device implementing an adaptive training of machine learning models via detected in-field contextual incident timeline entry and associated located and retrieved digital audio and/or video imaging, the electronic computing device comprising: a memory storing non-transitory computer-readable instructions; a transceiver; and one or more processors configured to, in response to executing the non-transitory computer-readable instructions, perform a first set of functions comprising: receive, via the transceiver, first context information including one or more entered in-field incident timeline information values from an in-field incident timeline application and a time associated with an entry of the one or more first context information values at the in-field incident timeline application; access an incident timeline information to detectable event mapping that maps in-field incident timeline information values to events having a pre-determined threshold confidence of occurring; identify, via the incident timeline information to detectable event mapping using the first context information, a particular event associated with the received first context information; determine a geographic location associated with the entry of the first context information and a time period relative to the time associated with the entry of the first context information; access an imaging camera location database and identify, via the imaging camera location database, one or more particular imaging cameras that has or had a field of view including the determined geographic location during the determined time period; retrieve one or more audio and/or video streams captured by the one or more particular imaging cameras during the determined time period; identify one or more machine learning training modules corresponding to one or more machine learning models for detecting the particular event in audio and/or video streams; and provide the one or more audio and/or video streams to the identified one or more machine learning training modules for further training of the corresponding machine learning models.
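Claims 15-17 describe a lookup from detectable events to machine learning training modules (keyed by a unique module identifier) and the accumulation of retrieved streams into each module's stored collection of training data. The Python sketch below is a minimal, hypothetical reading of that flow; the event names, module identifiers, and stream references are illustrative assumptions, not values from the application.

```python
# Hypothetical sketch of claims 15-17; every name here is assumed.
from collections import defaultdict

# Claim 17: event to machine learning model mapping, keyed by a unique
# identifier associated with each machine learning training module.
EVENT_TO_MODULES = {
    "vehicular_speeding": ["mlm-speeding-av"],
    "driving_while_impaired": ["mlm-dwi-av"],
    "open_weapon_possession": ["mlm-weapon-av"],
}

# Claims 15-16: a stored collection of training data per training module.
training_collections = defaultdict(list)

def provide_streams(event, stream_refs, on_demand=False):
    """Add retrieved audio/video stream references to each matching
    module's training collection; if on_demand is set, also transmit a
    re-training instruction (the on-demand variant of claim 16)."""
    for module_id in EVENT_TO_MODULES.get(event, []):
        training_collections[module_id].extend(stream_refs)
        if on_demand:
            # Stand-in for transmitting an instruction to initiate re-training.
            print(f"re-train requested for {module_id}")

provide_streams("vehicular_speeding",
                ["dems://cam42/2021-06-01T10:15"], on_demand=True)
```

The on_demand flag marks the split between claim 15, where streams are only accumulated for periodic re-training, and claim 16, where an explicit re-training instruction is transmitted.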
2,600
10,884
10,884
13,689,006
2,612
A set of instructions stored on at least one computer readable medium for running on a computer system. The set of instructions includes instructions for identifying edges of a structure displayed in multiple oblique images, instructions for determining three-dimensional information of the edges including position, orientation and length of the edges utilizing multiple oblique images from multiple cardinal directions, and instructions for determining, automatically, at least one line segment of a portion of a footprint of the structure utilizing at least one of the relative position and orientation of the edges.
1. A set of instructions stored on at least one non-transitory computer readable medium for running on a computer system, comprising: a. instructions for receiving one or more electronic files of oblique images into one or more memory; b. instructions for identifying a structure having at least four sides within the one or more electronic files, the sides having edges; c. instructions for determining locations and orientations of the edges of the sides of the structure; d. instructions for determining relative lengths of the sides of the structure utilizing the locations and orientations of edges of the sides of the structure to produce a series of line segments representing the sides of the structure, the line segments having a relative length and an orientation; and e. instructions for assembling the line segments based on their relative lengths and orientations to form a footprint of the structure. 2. The set of instructions stored on the at least one non-transitory computer readable medium of claim 1, wherein the instructions c.-e. are adapted to be executed without manual intervention. 3. The set of instructions stored on the at least one non-transitory computer readable medium of claim 1, wherein the instructions c. include instructions for receiving user input for determining the locations and orientations of the edges. 4. The set of instructions stored on the at least one non-transitory computer readable medium of claim 1, where the instructions are adapted to cause the computer system to generate a three-dimensional model of the structure utilizing the line segments. 5. The set of instructions stored on the at least one non-transitory computer readable medium of claim 1, further comprising at least one instruction for storing information indicative of the edges as line segments. 6. The set of instructions stored on the at least one non-transitory computer readable medium of claim 5, wherein the line segments have lengths, and further comprising instructions for providing a cumulative length of the line segments for the footprint of the structure. 7. The set of instructions stored on the at least one non-transitory computer readable medium of claim 5, wherein the line segments have lengths, and further comprising instructions for determining an area of the footprint of the structure. 8. The set of instructions stored on the at least one non-transitory computer readable medium of claim 1, wherein the set of instructions further comprises instructions for grouping the edges by relative position. 9. The set of instructions stored on the at least one non-transitory computer readable medium of claim 8, wherein the instructions for grouping the edges by relative position includes instructions for receiving user input to group edges by relative position. 10. The set of instructions stored on the at least one non-transitory computer readable medium of claim 1, wherein the edges include vertical edges and at least one horizontal edge and wherein the set of instructions includes instructions for determining vertices of the footprint. 11. The set of instructions stored on the at least one non-transitory computer readable medium of claim 10, wherein the set of instructions includes instructions for determining at least one horizontal edge extending a length between the vertical edges in determining at least one line segment of the footprint, the vertical edges having a top and a bottom, and the at least one horizontal edge being above the bottoms of the vertical edges. 12. 
The set of instructions stored on the at least one computer readable medium of claim 11, wherein the horizontal edge extending between the vertices extends the entire length between the vertices and wherein the at least one line segment of the footprint is determined with the at least one horizontal edge. 13. The set of instructions stored on the at least one computer readable medium of claim 11, wherein the horizontal edge extending between the vertices extends only a portion of the length between the vertices and wherein the at least one line segment of the footprint is determined with the at least one horizontal edge. 14. The set of instructions stored on the at least one computer readable medium of claim 10, where the set of instructions includes instructions for determining an angle between at least one vertical edge and at least one horizontal edge for determining at least one line segment of the footprint. 15. The set of instructions stored on the at least one computer readable medium of claim 1, wherein the set of instructions for determining at least one line segment forming a portion of a footprint utilizes wire frame data of the structure determined from the one or more electronic file of the oblique images. 16. A method, comprising the step of: making a set of instructions on a computer readable medium accessible to a processor of a computer system, the set of instructions including instructions for: identifying edges of a structure by analyzing one or more electronic file stored in one or more non-transitory memory, the electronic file being indicative of at least one geo-referenced oblique image; determining three-dimensional information of the edges including position, orientation and relative lengths of the edges using multiple oblique images from multiple cardinal directions; and determining, automatically, at least one line segment of a portion of a footprint of the structure utilizing at least one of the relative position and orientation of the edges. 17. A method, comprising the step of: selling and distributing a set of instructions stored on at least one computer readable medium for: identifying edges of a structure by analyzing one or more electronic file stored in one or more non-transitory memory, the electronic file being indicative of at least one geo-referenced oblique image; determining three-dimensional information of the edges including position, orientation and relative length of the edges using multiple oblique images from multiple cardinal directions; and determining, automatically, at least one line segment of a portion of a footprint of the structure utilizing at least one of the relative position and orientation of the edges. 18. A method comprising the step of: providing access to a set of instructions stored on a first computer readable medium for installation on a second computer readable medium associated with a user device, the set of instructions including instructions for: identifying edges of a structure by analyzing one or more electronic file stored in one or more non-transitory memory, the electronic file being indicative of at least one geo-referenced oblique image; determining three-dimensional information of the edges including position, orientation and relative length of the edges using multiple oblique images from multiple cardinal directions; and determining, automatically, at least one line segment of a portion of a footprint of the structure utilizing at least one of the relative position and orientation of the edges. 19. 
A computer system, comprising: at least one processor; one or more computer readable medium storing a set of instructions that when executed by the at least one processor causes the at least one processor to: identify edges of a structure displayed within one or more geo-referenced images by analyzing one or more electronic file stored in one or more non-transitory memory, the electronic file being indicative of the one or more geo-referenced images; determine three-dimensional information of the edges including position, orientation and relative length of the edges utilizing the one or more geo-referenced images; and determine, automatically, at least one line segment of a portion of a footprint of the structure utilizing at least one of the relative position and orientation of the edges. 20. The computer system of claim 19, wherein the edges include vertical edges and at least one horizontal edge and wherein the set of instructions when executed by the at least one processor further causes the at least one processor to determine vertices of the footprint. 21. The computer system of claim 20, wherein the set of instructions when executed by the at least one processor further causes the at least one processor to determine at least one horizontal edge extending a length between the vertical edges in determining at least one line segment of the footprint, the vertical edges having a top and a bottom, and the at least one horizontal edge being above the bottoms of the vertical edges.
A set of instructions stored on at least one computer readable medium for running on a computer system. The set of instructions includes instructions for identifying edges of a structure displayed in multiple oblique images, instructions for determining three-dimensional information of the edges including position, orientation and length of the edges utilizing multiple oblique images from multiple cardinal directions, and instructions for determining, automatically, at least one line segment of a portion of a footprint of the structure utilizing at least one of the relative position and orientation of the edges.1. A set of instructions stored on at least one non-transitory computer readable medium for running on a computer system, comprising: a. instructions for receiving one or more electronic files of oblique images into one or more memory; b. instructions for identifying a structure having at least four sides within the one or more electronic files, the sides having edges; c. instructions for determining locations and orientations of the edges of the sides of the structure; d. instructions for determining relative lengths of the sides of the structure utilizing the locations and orientations of edges of the sides of the structure to produce a series of line segments representing the sides of the structure, the line segments having a relative length and an orientation; and e. instructions for assembling the line segments based on their relative lengths and orientations to form a footprint of the structure. 2. The set of instructions stored on the at least one non-transitory computer readable medium of claim 1, wherein the instructions c.-e. are adapted to be executed without manual intervention. 3. The set of instructions stored on the at least one non-transitory computer readable medium of claim 1, wherein the instructions c. include instructions for receiving user input for determining the locations and orientations of the edges. 4. The set of instructions stored on the at least one non-transitory computer readable medium of claim 1, where the instructions are adapted to cause the computer system to generate a three-dimensional model of the structure utilizing the line segments. 5. The set of instructions stored on the at least one non-transitory computer readable medium of claim 1, further comprising at least one instruction for storing information indicative of the edges as line segments. 6. The set of instructions stored on the at least one non-transitory computer readable medium of claim 5, wherein the line segments have lengths, and further comprising instructions for providing a cumulative length of the line segments for the footprint of the structure. 7. The set of instructions stored on the at least one non-transitory computer readable medium of claim 5, wherein the line segments have lengths, and further comprising instructions for determining an area of the footprint of the structure. 8. The set of instructions stored on the at least one non-transitory computer readable medium of claim 1, wherein the set of instructions further comprises instructions for grouping the edges by relative position. 9. The set of instructions stored on the at least one non-transitory computer readable medium of claim 8, wherein the instructions for grouping the edges by relative position includes instructions for receiving user input to group edges by relative position. 10. 
The set of instructions stored on the at least one non-transitory computer readable medium of claim 1, wherein the edges include vertical edges and at least one horizontal edge and wherein the set of instructions includes instructions for determining vertices of the footprint. 11. The set of instructions stored on the at least one non-transitory computer readable medium of claim 10, wherein the set of instructions includes instructions for determining at least one horizontal edge extending a length between the vertical edges in determining at least one line segment of the footprint, the vertical edges having a top and a bottom, and the at least one horizontal edge being above the bottoms of the vertical edges. 12. The set of instructions stored on the at least one computer readable medium of claim 11, wherein the horizontal edge extending between the vertices extends the entire length between the vertices and wherein the at least one line segment of the footprint is determined with the at least one horizontal edge. 13. The set of instructions stored on the at least one computer readable medium of claim 11, wherein the horizontal edge extending between the vertices extends only a portion of the length between the vertices and wherein the at least one line segment of the footprint is determined with the at least one horizontal edge. 14. The set of instructions stored on the at least one computer readable medium of claim 10, where the set of instructions includes instructions for determining an angle between at least one vertical edge and at least one horizontal edge for determining at least one line segment of the footprint. 15. The set of instructions stored on the at least one computer readable medium of claim 1, wherein the set of instructions for determining at least one line segment forming a portion of a footprint utilizes wire frame data of the structure determined from the one or more electronic file of the oblique images. 16. A method, comprising the step of: making a set of instructions on a computer readable medium accessible to a processor of a computer system, the set of instructions including instructions for: identifying edges of a structure by analyzing one or more electronic file stored in one or more non-transitory memory, the electronic file being indicative of at least one geo-referenced oblique image; determining three-dimensional information of the edges including position, orientation and relative lengths of the edges using multiple oblique images from multiple cardinal directions; and determining, automatically, at least one line segment of a portion of a footprint of the structure utilizing at least one of the relative position and orientation of the edges. 17. A method, comprising the step of: selling and distributing a set of instructions stored on at least one computer readable medium for: identifying edges of a structure by analyzing one or more electronic file stored in one or more non-transitory memory, the electronic file being indicative of at least one geo-referenced oblique image; determining three-dimensional information of the edges including position, orientation and relative length of the edges using multiple oblique images from multiple cardinal directions; and determining, automatically, at least one line segment of a portion of a footprint of the structure utilizing at least one of the relative position and orientation of the edges. 18. 
A method comprising the step of: providing access to a set of instructions stored on a first computer readable medium for installation on a second computer readable medium associated with a user device, the set of instructions including instructions for: identifying edges of a structure by analyzing one or more electronic file stored in one or more non-transitory memory, the electronic file being indicative of at least one geo-referenced oblique image; determining three-dimensional information of the edges including position, orientation and relative length of the edges using multiple oblique images from multiple cardinal directions; and determining, automatically, at least one line segment of a portion of a footprint of the structure utilizing at least one of the relative position and orientation of the edges. 19. A computer system, comprising: at least one processor; one or more computer readable medium storing a set of instructions that when executed by the at least one processor causes the at least one processor to: identify edges of a structure displayed within one or more geo-referenced images by analyzing one or more electronic file stored in one or more non-transitory memory, the electronic file being indicative of the one or more geo-referenced images; determine three-dimensional information of the edges including position, orientation and relative length of the edges utilizing the one or more geo-referenced images; and determine, automatically, at least one line segment of a portion of a footprint of the structure utilizing at least one of the relative position and orientation of the edges. 20. The computer system of claim 19, wherein the edges include vertical edges and at least one horizontal edge and wherein the set of instructions when executed by the at least one processor further causes the at least one processor to determine vertices of the footprint. 21. The computer system of claim 20, wherein the set of instructions when executed by the at least one processor further causes the at least one processor to determine at least one horizontal edge extending a length between the vertical edges in determining at least one line segment of the footprint, the vertical edges having a top and a bottom, and the at least one horizontal edge being above the bottoms of the vertical edges.
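Claims 6 and 7 recite a cumulative length of the footprint's line segments and an area of the footprint. Once the assembled segments are expressed as an ordered ring of vertices, both follow from elementary geometry; the sketch below uses the shoelace formula for the area. The vertex representation and the sample coordinates are assumptions for illustration, not disclosures of the application.

```python
# One plausible computation behind claims 6-7: perimeter (cumulative
# segment length) and area of an assembled footprint polygon.
import math

def footprint_metrics(vertices):
    """vertices: ordered (x, y) corners of the closed footprint ring."""
    perimeter, twice_area = 0.0, 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]             # wrap to close the ring
        perimeter += math.hypot(x2 - x1, y2 - y1)  # cumulative length
        twice_area += x1 * y2 - x2 * y1            # shoelace accumulation
    return perimeter, abs(twice_area) / 2.0

print(footprint_metrics([(0, 0), (12, 0), (12, 8), (0, 8)]))  # (40.0, 96.0)
```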
2,600
10,885
10,885
15,235,626
2,654
In an active noise reducing headphone, a signal processor applies filters and control gains of both the feed-forward and feedback active noise cancellation signal paths. The signal processor is configured to apply first feed-forward filters to the feed-forward signal path and apply first feedback filters to the feedback signal path during a first operating mode providing effective cancellation of ambient sound, and to apply second feed-forward filters to the feed-forward signal path during a second operating mode providing active hear-through of ambient sounds with ambient naturalness.
1. A computer-implemented method comprising: receiving a first external audio signal from a microphone of a noise-reducing device; generating a noise-cancellation signal for cancelling the first external audio signal; causing a speaker of the noise-reducing device to generate audio corresponding to the noise-cancellation signal in an active noise reduction (ANR) mode; receiving a second external audio signal from the microphone of the noise-reducing device; determining that at least a first portion of the second external audio signal indicates that the noise-reducing device should transition from the ANR mode to a hear-through mode; transitioning to the hear-through mode, the transitioning comprises facilitating presentation of at least a second portion of the second external audio signal at the noise-reducing device by modifying noise cancellation for at least the second portion of the second external audio signal. 2. The method of claim 1, wherein the second external audio signal comprises a voice or an alarm. 3. The method of claim 1, wherein the second external audio signal comprises a voice of a user of the noise-reducing device. 4. The method of claim 1, wherein the hear-through mode comprises at least one of: passive monitoring, direct talk-through, and active hear-through. 5. The method of claim 1, wherein modifying noise cancellation for at least the second portion of the second external audio signal comprises: modifying one or more filter coefficients for at least one of: a feed-forward and feedback path of the noise-reducing device, wherein the modified filter coefficients result in less attenuation of sounds within the human speech band than of sounds outside of the human speech band. 6. The method of claim 5, wherein the modified filter coefficients cause a corresponding filter to have at least one right-half-plane zero in the vicinity of a transition between sounds within the human speech band and sounds outside of the human speech band. 7. The method of claim 3, wherein the noise-reducing device remains in the hear-through mode at least as long as the voice of the user of the noise-reducing device is detected and for a predetermined time period after the voice of the user of the noise-reducing device is last detected. 8. The method of claim 1, further comprising: selecting the hear-through mode from a plurality of hear-through modes. 9. The method of claim 8, wherein selecting the hear-through mode from a plurality of hear-through modes comprises: detecting a level of ambient noise; and selecting the hear-through mode from the plurality of hear-through modes based on the level of ambient noise. 10. 
A noise-reducing device comprising: an ear piece configured to couple to a wearer's ear, the ear piece providing passive attenuation of ambient sound into the wearer's ear; a first microphone acoustically coupled to an external environment and electrically coupled to a first active noise cancellation signal path having a first filter with configurable coefficients; an output transducer acoustically coupled to the wearer's ear canal when the ear piece is coupled to the wearer's ear and electrically coupled to the first active noise cancellation signal path; and a signal processor configured to apply the coefficients of the first filter, wherein: in an active noise reduction (ANR) mode, the first microphone is configured to receive a first external audio signal, the first active noise cancellation signal path is configured to generate a noise cancellation signal for cancelling the first external audio signal, and the output transducer is configured to generate audio corresponding to the noise cancellation signal; the first microphone is configured to receive a second external audio signal; and the signal processor is configured to: determine that at least a first portion of the second external audio signal indicates that the noise-reducing device should transition from the ANR mode to a hear-through mode; and transition the noise-reducing device to the hear-through mode, the transition comprises facilitating presentation of at least a second portion of the second external audio signal at the noise-reducing device by modifying noise cancellation for at least the second portion of the second external audio signal. 11. The noise-reducing device of claim 10, wherein the second external audio signal comprises a voice or an alarm. 12. The noise-reducing device of claim 10, wherein the second external audio signal comprises a voice of the wearer of the noise-reducing device. 13. The noise-reducing device of claim 10, wherein the hear-through mode comprises at least one of: passive monitoring, direct talk-through, and active hear-through. 14. The noise-reducing device of claim 10, wherein modifying noise cancellation for at least the second portion of the second external audio signal comprises: modifying one or more of the coefficients applied to the first filter, wherein the modified filter coefficients result in less attenuation of sounds within the human speech band than of sounds outside of the human speech band. 15. The noise-reducing device of claim 14, wherein the modified filter coefficients cause the first filter to have at least one right-half-plane zero in the vicinity of a transition between sounds within the human speech band and sounds outside of the human speech band. 16. The noise-reducing device of claim 12, wherein the signal processor causes the noise-reducing device to remain in the hear-through mode at least as long as the voice of the wearer of the noise-reducing device is detected and for a predetermined time period after the voice of the user of the noise-reducing device is last detected. 17. The noise-reducing device of claim 10, wherein the hear-through mode is selected from a plurality of hear-through modes. 18. The noise-reducing device of claim 17, wherein the hear-through mode is selected from the plurality of hear-through modes based on a level of ambient noise. 19. 
The noise-reducing device of claim 10, further comprising: a second microphone acoustically coupled to the wearer's ear canal when the ear piece is coupled to the wearer's ear and electrically coupled to a second active noise cancellation signal path having a second filter with configurable coefficients, wherein the output transducer is also electrically coupled to the second active noise cancellation signal path, the signal processor is further configured to apply the coefficients of the second filter, and wherein modifying noise cancellation for at least the second portion of the second external audio signal comprises modifying the coefficients of the second filter. 20. The noise-reducing device of claim 10, further comprising a wireless connection to an audio source. 21. The noise-reducing device of claim 10, further comprising a visual indicator, the visual indicator configured to be in a first state that indicates the noise-reducing device is in the ANR mode, and a second state that indicates the noise-reducing device is in the hear-through mode. 22. An active noise reducing headphone comprising: an audio input unit configured to receive an external audio signal; a noise cancellation unit configured to perform noise cancellation using a first portion of the external audio signal in an active noise reducing (ANR) mode; and a processor configured to: determine that a second portion of the external audio signal indicates that the active noise reducing headphone should transition from the ANR mode to an active hear-through mode; transition the active noise reducing headphone to the active hear-through mode, the transition comprising facilitating presentation of at least a third portion of the external audio signal at the active noise reducing headphone by modifying noise cancellation for at least the third portion of the external audio signal. 23. The active noise reducing headphone of claim 22, wherein the processor determines that the second portion of the external audio signal indicates that the active noise reducing headphone should transition from the ANR mode to the active hear-through mode based on a user input. 24. The active noise reducing headphone of claim 22, wherein the processor determines that the second portion of the external audio signal indicates that the active noise reducing headphone should transition from the ANR mode to the active hear-through mode based on detecting a voice of the wearer of the active noise reducing headphone. 25. The active noise reducing headphone of claim 22, wherein the processor determines that the second portion of the external audio signal indicates that the active noise reducing headphone should transition from the ANR mode to the active hear-through mode based on detecting a voice of someone other than the wearer of the active noise reducing headphone. 26. The active noise reducing headphone of claim 22, wherein modifying noise cancellation for at least the third portion of the external audio signal comprises: modifying one or more coefficients applied to a filter in the noise cancellation unit, wherein the modified filter coefficients result in less attenuation of sounds within the human speech band than of sounds outside of the human speech band. 27. The active noise reducing headphone of claim 26, wherein the modified filter coefficients cause the filter to have at least one right-half-plane zero in the vicinity of a transition between sounds within the human speech band and sounds outside of the human speech band. 28. 
The active noise reducing headphone of claim 23, wherein the processor causes the active noise reducing headphone to remain in the active hear-through mode at least until an additional user input is received. 29. The active noise reducing headphone of claim 24, wherein the processor causes the active noise reducing headphone to remain in the active hear-through mode at least as long as the voice of the wearer of the active noise reducing headphone is detected and for a predetermined time period after the voice of the wearer of the noise-reducing device is last detected.
In an active noise reducing headphone, a signal processor applies filters and control gains of both the feed-forward and feedback active noise cancellation signal paths. The signal processor is configured to apply first feed-forward filters to the feed-forward signal path and apply first feedback filters to the feedback signal path during a first operating mode providing effective cancellation of ambient sound, and to apply second feed-forward filters to the feed-forward signal path during a second operating mode providing active hear-through of ambient sounds with ambient naturalness.1. A computer-implemented method comprising: receiving a first external audio signal from a microphone of a noise-reducing device; generating a noise-cancellation signal for cancelling the first external audio signal; causing a speaker of the noise-reducing device to generate audio corresponding to the noise-cancellation signal in an active noise reduction (ANR) mode; receiving a second external audio signal from the microphone of the noise-reducing device; determining that at least a first portion of the second external audio signal indicates that the noise-reducing device should transition from the ANR mode to a hear-through mode; transitioning to the hear-through mode, the transitioning comprises facilitating presentation of at least a second portion of the second external audio signal at the noise-reducing device by modifying noise cancellation for at least the second portion of the second external audio signal. 2. The method of claim 1, wherein the second external audio signal comprises a voice or an alarm. 3. The method of claim 1, wherein the second external audio signal comprises a voice of a user of the noise-reducing device. 4. The method of claim 1, wherein the hear-through mode comprises at least one of: passive monitoring, direct talk-through, and active hear-through. 5. The method of claim 1, wherein modifying noise cancellation for at least the second portion of the second external audio signal comprises: modifying one or more filter coefficients for at least one of: a feed-forward and feedback path of the noise-reducing device, wherein the modified filter coefficients result in less attenuation of sounds within the human speech band than of sounds outside of the human speech band. 6. The method of claim 5, wherein the modified filter coefficients cause a corresponding filter to have at least one right-half-plane zero in the vicinity of a transition between sounds within the human speech band and sounds outside of the human speech band. 7. The method of claim 3, wherein the noise-reducing device remains in the hear-through mode at least as long as the voice of the user of the noise-reducing device is detected and for a predetermined time period after the voice of the user of the noise-reducing device is last detected. 8. The method of claim 1, further comprising: selecting the hear-through mode from a plurality of hear-through modes. 9. The method of claim 8, wherein selecting the hear-through mode from a plurality of hear-through modes comprises: detecting a level of ambient noise; and selecting the hear-through mode from the plurality of hear-through modes based on the level of ambient noise. 10. 
A noise-reducing device comprising: an ear piece configured to couple to a wearer's ear, the ear piece providing passive attenuation of ambient sound into the wearer's ear; a first microphone acoustically coupled to an external environment and electrically coupled to a first active noise cancellation signal path having a first filter with configurable coefficients; an output transducer acoustically coupled to the wearer's ear canal when the ear piece is coupled to the wearer's ear and electrically coupled to the first active noise cancellation signal path; and a signal processor configured to apply the coefficients of the first filter, wherein: in an active noise reduction (ANR) mode, the first microphone is configured to receive a first external audio signal, the first active noise cancellation signal path is configured to generate a noise cancellation signal for cancelling the first external audio signal, and the output transducer is configured to generate audio corresponding to the noise cancellation signal; the first microphone is configured to receive a second external audio signal; and the signal processor is configured to: determine that at least a first portion of the second external audio signal indicates that the noise-reducing device should transition from the ANR mode to a hear-through mode; and transition the noise-reducing device to the hear-through mode, the transition comprises facilitating presentation of at least a second portion of the second external audio signal at the noise-reducing device by modifying noise cancellation for at least the second portion of the second external audio signal. 11. The noise-reducing device of claim 10, wherein the second external audio signal comprises a voice or an alarm. 12. The noise-reducing device of claim 10, wherein the second external audio signal comprises a voice of the wearer of the noise-reducing device. 13. The noise-reducing device of claim 10, wherein the hear-through mode comprises at least one of: passive monitoring, direct talk-through, and active hear-through. 14. The noise-reducing device of claim 10, wherein modifying noise cancellation for at least the second portion of the second external audio signal comprises: modifying one or more of the coefficients applied to the first filter, wherein the modified filter coefficients result in less attenuation of sounds within the human speech band than of sounds outside of the human speech band. 15. The noise-reducing device of claim 14, wherein the modified filter coefficients cause the first filter to have at least one right-half-plane zero in the vicinity of a transition between sounds within the human speech band and sounds outside of the human speech band. 16. The noise-reducing device of claim 12, wherein the signal processor causes the noise-reducing device to remain in the hear-through mode at least as long as the voice of the wearer of the noise-reducing device is detected and for a predetermined time period after the voice of the user of the noise-reducing device is last detected. 17. The noise-reducing device of claim 10, wherein the hear-through mode is selected from a plurality of hear-through modes. 18. The noise-reducing device of claim 17, wherein the hear-through mode is selected from the plurality of hear-through modes based on a level of ambient noise. 19. 
The noise-reducing device of claim 10, further comprising: a second microphone acoustically coupled to the wearer's ear canal when the ear piece is coupled to the wearer's ear and electrically coupled to a second active noise cancellation signal path having a second filter with configurable coefficients, wherein the output transducer is also electrically coupled to the second active noise cancellation signal path, the signal processor is further configured to apply the coefficients of the second filter, and wherein modifying noise cancellation for at least the second portion of the second external audio signal comprises modifying the coefficients of the second filter. 20. The noise-reducing device of claim 10, further comprising a wireless connection to an audio source. 21. The noise-reducing device of claim 10, further comprising a visual indicator, the visual indicator configured to be in a first state that indicates the noise-reducing device is in the ANR mode, and a second state that indicates the noise-reducing device is in the hear-through mode. 22. An active noise reducing headphone comprising: an audio input unit configured to receive an external audio signal; a noise cancellation unit configured to perform noise cancellation using a first portion of the external audio signal in an active noise reducing (ANR) mode; and a processor configured to: determine that a second portion of the external audio signal indicates that the active noise reducing headphone should transition from the ANR mode to an active hear-through mode; transition the active noise reducing headphone to the active hear-through mode, the transition comprising facilitating presentation of at least a third portion of the external audio signal at the active noise reducing headphone by modifying noise cancellation for at least the third portion of the external audio signal. 23. The active noise reducing headphone of claim 22, wherein the processor determines that the second portion of the external audio signal indicates that the active noise reducing headphone should transition from the ANR mode to the active hear-through mode based on a user input. 24. The active noise reducing headphone of claim 22, wherein the processor determines that the second portion of the external audio signal indicates that the active noise reducing headphone should transition from the ANR mode to the active hear-through mode based on detecting a voice of the wearer of the active noise reducing headphone. 25. The active noise reducing headphone of claim 22, wherein the processor determines that the second portion of the external audio signal indicates that the active noise reducing headphone should transition from the ANR mode to the active hear-through mode based on detecting a voice of someone other than the wearer of the active noise reducing headphone. 26. The active noise reducing headphone of claim 22, wherein modifying noise cancellation for at least the third portion of the external audio signal comprises: modifying one or more coefficients applied to a filter in the noise cancellation unit, wherein the modified filter coefficients result in less attenuation of sounds within the human speech band than of sounds outside of the human speech band. 27. The active noise reducing headphone of claim 26, wherein the modified filter coefficients cause the filter to have at least one right-half-plane zero in the vicinity of a transition between sounds within the human speech band and sounds outside of the human speech band. 28. 
The active noise reducing headphone of claim 23, wherein the processor causes the active noise reducing headphone to remain in the active hear-through mode at least until an additional user input is received. 29. The active noise reducing headphone of claim 24, wherein the processor causes the active noise reducing headphone to remain in the active hear-through mode at least as long as the voice of the wearer of the active noise reducing headphone is detected and for a predetermined time period after the voice of the wearer of the noise-reducing device is last detected.
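Claims 7 and 29 hold the hear-through mode for as long as the wearer's voice is detected and for a predetermined period after it was last detected. The sketch below shows that hold logic in isolation; the two-second hold value and the external voice-detection input are illustrative assumptions.

```python
# Minimal sketch of the mode-hold behavior of claims 7 and 29.
HOLD_SECONDS = 2.0  # the "predetermined time period" (value assumed)

class ModeController:
    def __init__(self):
        self.mode = "ANR"
        self.last_voice_time = None

    def update(self, now, wearer_voice_detected):
        if wearer_voice_detected:
            self.last_voice_time = now
            self.mode = "HEAR_THROUGH"
        elif (self.mode == "HEAR_THROUGH"
              and self.last_voice_time is not None
              and now - self.last_voice_time > HOLD_SECONDS):
            self.mode = "ANR"  # hold period expired; resume noise reduction
        return self.mode

ctrl = ModeController()
for t, voiced in [(0.0, True), (0.5, False), (1.0, False), (3.0, False)]:
    print(t, ctrl.update(t, voiced))
# stays in HEAR_THROUGH until t exceeds the last detection by HOLD_SECONDS
```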
2,600
10,886
10,886
15,068,899
2,668
A reduced noise image can be formed from a set of images. One of the images of the set can be selected to be a reference image and other images of the set are transformed such that they are better aligned with the reference image. A measure of the alignment of each image with the reference image is determined. At least some of the transformed images can then be combined using weights which depend on the alignment of the transformed image with the reference image to thereby form the reduced noise image. By weighting the images according to their alignment with the reference image the effects of misalignment between the images in the combined image are reduced. Furthermore, motion correction may be applied to the reduced noise image.
1. A method of forming a reduced noise image using a set of images, the method comprising: obtaining a plurality of transformed images by applying respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; determining, for each of the transformed images, a respective measure of alignment of that transformed image with the reference image; determining weights for one or more of the transformed images using the determined measures of alignment; and combining a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image. 2. The method of claim 1 wherein the measure of alignment for a transformed image is a misalignment parameter τi determined as the sum, over all of the pixel positions (x,y) of the transformed image, of the absolute differences between the transformed image Wi(x, y) and the reference image Ir(x, y). 3. The method of claim 1 wherein said plurality of images which are combined to form the reduced noise image further includes the reference image. 4. The method of claim 1 wherein said plurality of images which are combined to form the reduced noise image does not include the reference image. 5. The method of claim 1 further comprising determining the transformations to apply to said at least some of the images, wherein for each of said at least some of the images the respective transformation is determined by: determining a set of points of the image which correspond to a predetermined set of points of the reference image; and determining parameters of the transformation for the image based on an error metric which is indicative of an error between a transformation of at least some of the determined set of points of the image and the corresponding points of the predetermined set of points of the reference image. 6. The method of claim 5 wherein the set of points of the image are determined using the Lucas Kanade Inverse algorithm. 7. The method of claim 6 wherein the Lucas Kanade Inverse algorithm is initialized using the results of a multiple kernel tracking technique. 8. The method of claim 7 wherein the multiple kernel tracking technique determines the positions of a set of candidate regions based on a similarity between a set of target regions and the set of candidate regions, wherein the target regions are respectively positioned over the positions of the predetermined set of points of the reference image, and wherein the determined positions of the set of candidate regions are used to initialize the Lucas Kanade Inverse algorithm. 9. The method of claim 1 further comprising, for each of the transformed images, determining whether the respective measure of alignment indicates that the alignment of the transformed image with the reference image is below a threshold alignment level, and in dependence thereon selectively including the transformed image as one of said one or more of the transformed images for which weights are determined. 10. The method of claim 1 wherein the set of images comprises either: (i) a plurality of images captured in a burst mode, or (ii) a plurality of frames of a video sequence. 11. 
A processing module for forming a reduced noise image using a set of images, the processing module comprising: alignment logic configured to: obtain a plurality of transformed images by applying respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; and determine, for each of the transformed images, a respective measure of alignment of that transformed image with the reference image; and combining logic configured to: determine weights for one or more of the transformed images using the determined measures of alignment; and combine a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image. 12. The processing module of claim 11, wherein the measure of alignment for a transformed image is a misalignment parameter τi determined as the sum, over all of the pixel positions (x,y) of the transformed image, of the absolute differences between the transformed image Wi(x, y) and the reference image Ir(x, y). 13. The processing module of claim 11, further comprising selection logic configured to select one of the images of the set of images to be the reference image. 14. The processing module of claim 13, wherein the selection logic is configured to select one of the images of the set of images to be the reference image by: determining sharpness indications for the images of the set of images; and based on the determined sharpness indications, selecting the sharpest image from the set of images to be the reference image. 15. The processing module of claim 14, wherein the selection logic is further configured to discard an image such that it is not provided to the alignment logic if the determined sharpness indication for the image is below a sharpness threshold. 16. The processing module of claim 14, wherein the sharpness indications are sums of absolute values of image Laplacian estimates for the respective images. 17. The processing module of claim 11, further comprising motion correction logic configured to apply motion correction to the reduced noise image formed by the combining logic. 18. The processing module of claim 17, wherein the motion correction logic is configured to apply motion correction to the reduced noise image by: determining motion indications indicating levels of motion for areas of the reduced noise image; and mixing areas of the reduced noise image with corresponding areas of the reference image based on the motion indications to form a motion-corrected, reduced noise image. 19. A non-transitory computer readable storage medium having stored thereon processor executable instructions that when executed cause at least one processor to: obtain from a set of images a plurality of transformed images by applying respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; determine, for each of the transformed images, a respective measure of alignment of that transformed image with the reference image; determine weights for one or more of the transformed images using the determined measures of alignment; and combine a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image. 20. 
A non-transitory computer readable storage medium having stored thereon a computer readable description of an integrated circuit that, when processed in an integrated circuit manufacturing system, causes the integrated circuit manufacturing system to manufacture a processing module comprising: alignment logic configured to: obtain from a set of images a plurality of transformed images by applying respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; and determine, for each of the transformed images, a respective measure of alignment of that transformed image with the reference image; and combining logic configured to: determine weights for one or more of the transformed images using the determined measures of alignment; and combine a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image.
A reduced noise image can be formed from a set of images. One of the images of the set can be selected to be a reference image and other images of the set are transformed such that they are better aligned with the reference image. A measure of the alignment of each image with the reference image is determined. At least some of the transformed images can then be combined using weights which depend on the alignment of the transformed image with the reference image to thereby form the reduced noise image. By weighting the images according to their alignment with the reference image the effects of misalignment between the images in the combined image are reduced. Furthermore, motion correction may be applied to the reduced noise image.1. A method of forming a reduced noise image using a set of images, the method comprising: obtaining a plurality of transformed images by applying respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; determining, for each of the transformed images, a respective measure of alignment of that transformed image with the reference image; determining weights for one or more of the transformed images using the determined measures of alignment; and combining a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image. 2. The method of claim 1 wherein the measure of alignment for a transformed image is a misalignment parameter τi determined as the sum, over all of the pixel positions (x,y) of the transformed image, of the absolute differences between the transformed image Wi(x, y) and the reference image Ir(x, y). 3. The method of claim 1 wherein said plurality of images which are combined to form the reduced noise image further includes the reference image. 4. The method of claim 1 wherein said plurality of images which are combined to form the reduced noise image does not include the reference image. 5. The method of claim 1 further comprising determining the transformations to apply to said at least some of the images, wherein for each of said at least some of the images the respective transformation is determined by: determining a set of points of the image which correspond to a predetermined set of points of the reference image; and determining parameters of the transformation for the image based on an error metric which is indicative of an error between a transformation of at least some of the determined set of points of the image and the corresponding points of the predetermined set of points of the reference image. 6. The method of claim 5 wherein the set of points of the image are determined using the Lucas Kanade Inverse algorithm. 7. The method of claim 6 wherein the Lucas Kanade Inverse algorithm is initialized using the results of a multiple kernel tracking technique. 8. The method of claim 7 wherein the multiple kernel tracking technique determines the positions of a set of candidate regions based on a similarity between a set of target regions and the set of candidate regions, wherein the target regions are respectively positioned over the positions of the predetermined set of points of the reference image, and wherein the determined positions of the set of candidate regions are used to initialize the Lucas Kanade Inverse algorithm. 9. 
The method of claim 1 further comprising, for each of the transformed images, determining whether the respective measure of alignment indicates that the alignment of the transformed image with the reference image is below a threshold alignment level, and in dependence thereon selectively including the transformed image as one of said one or more of the transformed images for which weights are determined. 10. The method of claim 1 wherein the set of images comprises either: (i) a plurality of images captured in a burst mode, or (ii) a plurality of frames of a video sequence. 11. A processing module for forming a reduced noise image using a set of images, the processing module comprising: alignment logic configured to: obtain a plurality of transformed images by applying respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; and determine, for each of the transformed images, a respective measure of alignment of that transformed image with the reference image; and combining logic configured to: determine weights for one or more of the transformed images using the determined measures of alignment; and combine a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image. 12. The processing module of claim 11, wherein the measure of alignment for a transformed image is a misalignment parameter τi determined as the sum, over all of the pixel positions (x,y) of the transformed image, of the absolute differences between the transformed image Wi(x, y) and the reference image Ir(x, y). 13. The processing module of claim 11, further comprising selection logic configured to select one of the images of the set of images to be the reference image. 14. The processing module of claim 13, wherein the selection logic is configured to select one of the images of the set of images to be the reference image by: determining sharpness indications for the images of the set of images; and based on the determined sharpness indications, selecting the sharpest image from the set of images to be the reference image. 15. The processing module of claim 14, wherein the selection logic is further configured to discard an image such that it is not provided to the alignment logic if the determined sharpness indication for the image is below a sharpness threshold. 16. The processing module of claim 14, wherein the sharpness indications are sums of absolute values of image Laplacian estimates for the respective images. 17. The processing module of claim 11, further comprising motion correction logic configured to apply motion correction to the reduced noise image formed by the combining logic. 18. The processing module of claim 17, wherein the motion correction logic is configured to apply motion correction to the reduced noise image by: determining motion indications indicating levels of motion for areas of the reduced noise image; and mixing areas of the reduced noise image with corresponding areas of the reference image based on the motion indications to form a motion-corrected, reduced noise image. 19. 
A non-transitory computer readable storage medium having stored thereon processor executable instructions that when executed cause at least one processor to: obtain from a set of images a plurality of transformed images by applying respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; determine, for each of the transformed images, a respective measure of alignment of that transformed image with the reference image; determine weights for one or more of the transformed images using the determined measures of alignment; and combine a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image. 20. A non-transitory computer readable storage medium having stored thereon a computer readable description of an integrated circuit that, when processed in an integrated circuit manufacturing system, causes the integrated circuit manufacturing system to manufacture a processing module comprising: alignment logic configured to: obtain from a set of images a plurality of transformed images by applying respective transformations to at least some of the images of the set to bring them closer to alignment with a reference image from the set of images; and determine, for each of the transformed images, a respective measure of alignment of that transformed image with the reference image; and combining logic configured to: determine weights for one or more of the transformed images using the determined measures of alignment; and combine a plurality of images including said one or more of the transformed images using the determined weights to form a reduced noise image.
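The claims pin down two concrete computations: the misalignment parameter τ_i of claim 2 (sum of absolute differences against the reference) and the sharpness indication of claim 16 (sum of absolute values of an image Laplacian estimate, used in claim 14 to pick the reference). A minimal NumPy sketch of the selection and combining steps follows; the inverse-misalignment weight 1/(1 + mean|W_i − I_r|) is an assumption standing in for the unspecified "weights which depend on the alignment", and the input images are assumed to have already been transformed toward the reference.

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness(img):
    # Claim 16: sharpness indication as the sum of absolute values of an
    # image Laplacian estimate.
    return np.abs(laplace(img.astype(np.float64))).sum()

def fuse(warped, tau_threshold=None):
    """Combine already-aligned images into a reduced noise image.

    `warped` is a list of equally sized 2-D arrays assumed to have been
    warped toward the reference; the reference itself is chosen as the
    sharpest image (claim 14) and kept in the combination (claim 3).
    """
    stack = [np.asarray(im, dtype=np.float64) for im in warped]
    ref = stack[int(np.argmax([sharpness(im) for im in stack]))]
    images, weights = [], []
    for w in stack:
        tau = np.abs(w - ref).sum()          # misalignment parameter τ_i (claim 2)
        if tau_threshold is not None and tau > tau_threshold:
            continue                         # claim 9: drop badly aligned images
        images.append(w)
        weights.append(1.0 / (1.0 + tau / w.size))  # assumed weighting rule
    weights = np.array(weights) / np.sum(weights)
    return sum(wt * im for wt, im in zip(weights, images))
```

The reference gets τ = 0 and hence the largest weight, so well-aligned frames dominate the average while misaligned ones contribute little, which is the stated purpose of the weighting.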
2,600
10,887
10,887
16,595,684
2,685
A codeset is described in a Public Codeset Communication Format (PCCF) as a format block including a plurality of fields having readily decipherable values, such as ASCII character values. One field is a mark/space information field that includes a sequence of mark time indicators and space time indicators for an operational signal of the codeset. A second field is a signal characteristic information field for the operational signal. Signal characteristic information may include carrier on/off information, repeat frame information, toggle control information, and last frame information. The PCCF is a codeset interchange format of general applicability.
1. A controlling device, comprising: a microcontroller; a transmitting device; an input device; and a memory storing an encrypted codeset information; wherein the microcontroller further comprises both a decryptor that decrypts a command generating information that is caused to be retrieved from the encrypted codeset information in response to an input received via use of the input device and a rendering engine that uses the command generating information as decrypted by the decryptor to generate a command signal for transmission by the transmitting device to a controllable device and wherein the command signal generated by the rendering engine has both a format recognizable by the controllable device and a data indicative of a functional operation of the controllable device that corresponds to the input received via use of the input device. 2. The controlling device as recited in claim 1, wherein the transmitting device comprises an infrared transmitting device. 3. The controlling device as recited in claim 1, wherein the input device comprises a hard key. 4. The controlling device as recited in claim 1, wherein the encrypted codeset information comprises a compressed format block having first data fields that each describe, via use of one or more characters taken from an alphabet, an input element of the controlling device and second data fields that each describe, via use of one or more characters taken from the alphabet, at least one function that is to be performed by a controllable device in response to an activation of the input element. 5. The controlling device as recited in claim 4, wherein the compressed format block further has a third data field that describes, via use of one or more characters selected from the alphabet, a type for the controllable device. 6. The controlling device as recited in claim 5, wherein the compressed format block further has a fourth data field that describes, via use of one or more characters selected from the alphabet, a type for the controllable device.
A codeset is described in a Public Codeset Communication Format (PCCF) as a format block including a plurality of fields having readily decipherable values, such as ASCII character values. One field is a mark/space information field that includes a sequence of mark time indicators and space time indicators for an operational signal of the codeset. A second field is a signal characteristic information field for the operational signal. Signal characteristic information may include carrier on/off information, repeat frame information, toggle control information, and last frame information. The PCCF is a codeset interchange format of general applicability.1. A controlling device, comprising: a microcontroller; a transmitting device; an input device; and a memory storing an encrypted codeset information; wherein the microcontroller further comprises both a decryptor that decrypts a command generating information that is caused to be retrieved from the encrypted codeset information in response to an input received via use of the input device and a rendering engine that uses the command generating information as decrypted by the decryptor to generate a command signal for transmission by the transmitting device to a controllable device and wherein the command signal generated by the rendering engine has both a format recognizable by the controllable device and a data indicative of a functional operation of the controllable device that corresponds to the input received via use of the input device. 2. The controlling device as recited in claim 1, wherein the transmitting device comprises an infrared transmitting device. 3. The controlling device as recited in claim 1, wherein the input device comprises a hard key. 4. The controlling device as recited in claim 1, wherein the encrypted codeset information comprises a compressed format block having first data fields that each describe, via use of one or more characters taken from an alphabet, an input element of the controlling device and second data fields that each describe, via use of one or more characters taken from the alphabet, at least one function that is to be performed by a controllable device in response to an activation of the input element. 5. The controlling device as recited in claim 4, wherein the compressed format block further has a third data field that describes, via use of one or more characters selected from the alphabet, a type for the controllable device. 6. The controlling device as recited in claim 5, wherein the compressed format block further has a fourth data field that describes, via use of one or more characters selected from the alphabet, a type for the controllable device.
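The abstract fixes the field inventory of a PCCF format block (a mark/space timing sequence plus carrier on/off, repeat frame, toggle control, and last frame flags, all as readily decipherable ASCII values) but not the byte-level layout. The sketch below is one hypothetical rendering of those two fields; the "M"/"S" prefixes and the single-letter flags are invented for illustration and are not the published PCCF encoding.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class OperationalSignal:
    """One codeset signal in a PCCF-like format block (layout hypothetical)."""
    mark_space: List[int]        # alternating mark/space durations, microseconds
    carrier_on: bool = True      # carrier on/off information
    repeat_frame: bool = False   # repeat frame information
    toggle_control: bool = False # toggle control information
    last_frame: bool = False     # last frame information

    def to_ascii(self) -> str:
        # Render both fields as plain ASCII, as the abstract requires;
        # even entries are mark times, odd entries are space times.
        times = ",".join(
            ("M" if i % 2 == 0 else "S") + str(t)
            for i, t in enumerate(self.mark_space)
        )
        flags = "".join(
            ch for ch, on in zip(
                "CRTL",
                (self.carrier_on, self.repeat_frame,
                 self.toggle_control, self.last_frame))
            if on
        )
        return times + ";" + flags
```

For example, OperationalSignal([9000, 4500, 560, 560]).to_ascii() yields "M9000,S4500,M560,S560;C", which a receiving tool could parse without any knowledge of the originating remote's binary codeset.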
2,600
10,888
10,888
14,751,684
2,628
A processing unit, comprising a display interface to control a foldable display with multiple segments created by fold lines in the foldable display. The processing unit also includes a plurality of lanes to connect the display interface to the foldable display, where each segment of the foldable display is connected to a lane. The processing unit also includes a multi-segment protocol component to instruct the display interface to drive data to each segment of the display through the plurality of lanes.
1. A processing unit, comprising: a display interface to control a foldable display with multiple segments created by fold lines in the foldable display; a plurality of lanes to connect the display interface to the foldable display, where each segment of the foldable display is connected to a lane; and a multi-segment protocol component to instruct the display interface to drive data to each segment of the display through the plurality of lanes. 2. The processing unit of claim 1, wherein one of the lanes is connected to an additional segment of the foldable display that is a virtual segment. 3. The processing unit of claim 1, wherein one of the plurality of lanes is not connected to a segment of the foldable display. 4. The processing unit of claim 1, wherein a segment of the foldable display is connected to multiple lanes. 5. The processing unit of claim 1, comprising a plurality of display interfaces wherein each of the plurality of lanes connects to a different display interface. 6. The processing unit of claim 1, wherein the multi-segment protocol component detects a fold event when a segment is folded and instructs the display interface to drive data based on the detected fold event. 7. The processing unit of claim 1, wherein the multi-segment protocol component detects when the foldable display is altered in orientation and instructs the display interface to drive data based on the detected altering of orientation. 8. The processing unit of claim 1, wherein the multi-segment protocol component detects a partial fold event and instructs the display interface to drive data based on the detected partial fold event. 9. The processing unit of claim 1, wherein the multi-segment protocol component detects a fold event and instructs the display interface to stop driving data to a segment based on the detected fold event. 10. The processing unit of claim 1, wherein the foldable display comprises a display panel on the opposite side of a segment of the display that receives power and driven data only when instructed by the multi-segment protocol component based on a detected fold event. 11. A method for displaying an image on a two-fold foldable display, comprising: controlling, with a display interface, a foldable display with multiple segments created by fold lines in the foldable display; connecting, with a plurality of lanes, the display interface to the foldable display, where each segment of the foldable display is connected to a lane; and instructing, with a multi-segment protocol component, the display interface to drive data to each segment of the display through the plurality of lanes. 12. The method of claim 11, comprising detecting a fold event when a segment is folded and instructing the display interface to drive data based on the detected fold event to maintain aspect ratio in the remaining visible area of the foldable panel. 13. The method of claim 11, comprising detecting when the foldable display is altered in orientation and instructing the display interface to drive data based on the detected altering of orientation. 14. The method of claim 11, comprising detecting a partial fold event and instructing the display interface to drive data based on the detected partial fold event. 15. The method of claim 11, comprising detecting a fold event and instructing the display interface to stop driving data to a segment based on the detected fold event. 16. 
A tangible, computer-readable medium to store instructions that when executed by a processor cause an apparatus to: control, with a display interface, a foldable display with multiple segments created by fold lines in the foldable display; connect, with a plurality of lanes, the display interface to the foldable display, where each segment of the foldable display is connected to a lane; and instruct, with a multi-segment protocol component, the display interface to drive data to each segment of the display through the plurality of lanes. 17. The tangible, computer-readable medium of claim 16, wherein one of the lanes is connected to an additional segment of the foldable display that is a virtual segment. 18. The tangible, computer-readable medium of claim 16, wherein one of the plurality of lanes is not connected to a segment of the foldable display. 19. The tangible, computer-readable medium of claim 16, wherein a segment of the foldable display is connected to multiple lanes. 20. The tangible, computer-readable medium of claim 16, comprising a plurality of display interfaces wherein each of the plurality of lanes connects to a different display interface. 21. The tangible, computer-readable medium of claim 16, wherein the multi-segment protocol component detects a fold event when a segment is folded and instructs the display interface to drive data based on the detected fold event. 22. The tangible, computer-readable medium of claim 16, wherein the multi-segment protocol component detects when the foldable display is altered in orientation and instructs the display interface to drive data based on the detected altering of orientation. 23. The tangible, computer-readable medium of claim 16, wherein the multi-segment protocol component detects a partial fold event and instructs the display interface to drive data based on the detected partial fold event. 24. The tangible, computer-readable medium of claim 16, wherein the multi-segment protocol component detects a fold event and instructs the display interface to stop driving data to a segment based on the detected fold event. 25. The tangible, computer-readable medium of claim 16, wherein the foldable display comprises a display panel on the opposite side of a segment of the display that receives power and driven data only when instructed by the multi-segment protocol component based on a detected fold event. 26. A system for displaying images on a foldable display, comprising: a processor; a foldable display with multiple segments created by fold lines in the foldable display; a display interface to control the foldable display; a plurality of lanes to connect the display interface to the foldable display, where each segment of the foldable display is connected to a lane; and a multi-segment protocol component to instruct the display interface to drive data to each segment of the display through the plurality of lanes. 27. The system of claim 26, wherein one of the lanes is connected to an additional segment of the foldable display that is a virtual segment. 28. The system of claim 26, wherein one of the plurality of lanes is not connected to a segment of the foldable display. 29. The system of claim 26, wherein a segment of the foldable display is connected to multiple lanes. 30. The system of claim 26, comprising a plurality of display interfaces wherein each of the plurality of lanes connects to a different display interface. 31. 
The system of claim 26, wherein the multi-segment protocol component detects a fold event when a segment is folded and instructs the display interface to drive data based on the detected fold event. 32. The system of claim 26, wherein the multi-segment protocol component detects when the foldable display is altered in orientation and instructs the display interface to drive data based on the detected altering of orientation. 33. The system of claim 26, wherein the multi-segment protocol component detects a partial fold event and instructs the display interface to drive data based on the detected partial fold event. 34. The system of claim 26, wherein the multi-segment protocol component detects a fold event and instructs the display interface to stop driving data to a segment based on the detected fold event. 35. The system of claim 26, wherein the foldable display comprises a display panel on the opposite side of a segment of the display that receives power and driven data only when instructed by the multi-segment protocol component based on a detected fold event.
A processing unit, comprising a display interface to control a foldable display with multiple segments created by fold lines in the foldable display. The processing unit also includes a plurality of lanes to connect the display interface to the foldable display, where each segment of the foldable display is connected to a lane. The processing unit also includes a multi-segment protocol component to instruct the display interface to drive data to each segment of the display through the plurality of lanes.1. A processing unit, comprising: a display interface to control a foldable display with multiple segments created by fold lines in the foldable display; a plurality of lanes to connect the display interface to the foldable display, where each segment of the foldable display is connected to a lane; and a multi-segment protocol component to instruct the display interface to drive data to each segment of the display through the plurality of lanes. 2. The processing unit of claim 1, wherein one of the lanes is connected to an additional segment of the foldable display that is a virtual segment. 3. The processing unit of claim 1, wherein one of the plurality of lanes is not connected to a segment of the foldable display. 4. The processing unit of claim 1, wherein a segment of the foldable display is connected to multiple lanes. 5. The processing unit of claim 1, comprising a plurality of display interfaces wherein each of the plurality of lanes connects to a different display interface. 6. The processing unit of claim 1, wherein the multi-segment protocol component detects a fold event when a segment is folded and instructs the display interface to drive data based on the detected fold event. 7. The processing unit of claim 1, wherein the multi-segment protocol component detects when the foldable display is altered in orientation and instructs the display interface to drive data based on the detected altering of orientation. 8. The processing unit of claim 1, wherein the multi-segment protocol component detects a partial fold event and instructs the display interface to drive data based on the detected partial fold event. 9. The processing unit of claim 1, wherein the multi-segment protocol component detects a fold event and instructs the display interface to stop driving data to a segment based on the detected fold event. 10. The processing unit of claim 1, wherein the foldable display comprises a display panel on the opposite side of a segment of the display that receives power and driven data only when instructed by the multi-segment protocol component based on a detected fold event. 11. A method for displaying an image on a two-fold foldable display, comprising: controlling, with a display interface, a foldable display with multiple segments created by fold lines in the foldable display; connecting, with a plurality of lanes, the display interface to the foldable display, where each segment of the foldable display is connected to a lane; and instructing, with a multi-segment protocol component, the display interface to drive data to each segment of the display through the plurality of lanes. 12. The method of claim 11, comprising detecting a fold event when a segment is folded and instructing the display interface to drive data based on the detected fold event to maintain aspect ratio in the remaining visible area of the foldable panel. 13. 
The method of claim 11, comprising detecting when the foldable display is altered in orientation and instructing the display interface to drive data based on the detected altering of orientation. 14. The method of claim 11, comprising detecting a partial fold event and instructing the display interface to drive data based on the detected partial fold event. 15. The method of claim 11, comprising detecting a fold event and instructing the display interface to stop driving data to a segment based on the detected fold event. 16. A tangible, computer-readable medium to store instructions that when executed by a processor cause an apparatus to: control, with a display interface, a foldable display with multiple segments created by fold lines in the foldable display; connect, with a plurality of lanes, the display interface to the foldable display, where each segment of the foldable display is connected to a lane; and instruct, with a multi-segment protocol component, the display interface to drive data to each segment of the display through the plurality of lanes. 17. The tangible, computer-readable medium of claim 16, wherein one of the lanes is connected to an additional segment of the foldable display that is a virtual segment. 18. The tangible, computer-readable medium of claim 16, wherein one of the plurality of lanes is not connected to a segment of the foldable display. 19. The tangible, computer-readable medium of claim 16, wherein a segment of the foldable display is connected to multiple lanes. 20. The tangible, computer-readable medium of claim 16, comprising a plurality of display interfaces wherein each of the plurality of lanes connects to a different display interface. 21. The tangible, computer-readable medium of claim 16, wherein the multi-segment protocol component detects a fold event when a segment is folded and instructs the display interface to drive data based on the detected fold event. 22. The tangible, computer-readable medium of claim 16, wherein the multi-segment protocol component detects when the foldable display is altered in orientation and instructs the display interface to drive data based on the detected altering of orientation. 23. The tangible, computer-readable medium of claim 16, wherein the multi-segment protocol component detects a partial fold event and instructs the display interface to drive data based on the detected partial fold event. 24. The tangible, computer-readable medium of claim 16, wherein the multi-segment protocol component detects a fold event and instructs the display interface to stop driving data to a segment based on the detected fold event. 25. The tangible, computer-readable medium of claim 16, wherein the foldable display comprises a display panel on the opposite side of a segment of the display that receives power and driven data only when instructed by the multi-segment protocol component based on a detected fold event. 26. A system for displaying images on a foldable display, comprising: a processor; a foldable display with multiple segments created by fold lines in the foldable display; a display interface to control the foldable display; a plurality of lanes to connect the display interface to the foldable display, where each segment of the foldable display is connected to a lane; and a multi-segment protocol component to instruct the display interface to drive data to each segment of the display through the plurality of lanes. 27. 
The system of claim 26, wherein one of the lanes is connected to an additional segment of the foldable display that is a virtual segment. 28. The system of claim 26, wherein one of the plurality of lanes is not connected to a segment of the foldable display. 29. The system of claim 26, wherein a segment of the foldable display is connected to multiple lanes. 30. The system of claim 26, comprising a plurality of display interfaces wherein each of the plurality of lanes connects to a different display interface. 31. The system of claim 26, wherein the multi-segment protocol component detects a fold event when a segment is folded and instructs the display interface to drive data based on the detected fold event. 32. The system of claim 26, wherein the multi-segment protocol component detects when the foldable display is altered in orientation and instructs the display interface to drive data based on the detected altering of orientation. 33. The system of claim 26, wherein the multi-segment protocol component detects a partial fold event and instructs the display interface to drive data based on the detected partial fold event. 34. The system of claim 26, wherein the multi-segment protocol component detects a fold event and instructs the display interface to stop driving data to a segment based on the detected fold event. 35. The system of claim 26, wherein the foldable display comprises a display panel on the opposite side of a segment of the display that receives power and driven data only when instructed by the multi-segment protocol component based on a detected fold event.
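Mechanically, the claims describe a one-lane-per-segment fan-out with a protocol component that reacts to fold events by gating which lanes are driven (claims 1, 6, and 9). A small Python sketch of that control flow follows; the class and method names, and the `drive(lane, data)` display-interface call, are assumptions for illustration rather than any vendor API.

```python
class MultiSegmentProtocol:
    """Sketch of the multi-segment protocol component: it tells a display
    interface which lanes to drive for which display segments."""

    def __init__(self, display_interface, segment_to_lane):
        self.iface = display_interface
        self.segment_to_lane = dict(segment_to_lane)  # segment id -> lane id
        self.folded = set()

    def on_fold_event(self, segment):
        # Claim 9: on a fold event, stop driving data to the folded segment.
        self.folded.add(segment)

    def on_unfold_event(self, segment):
        self.folded.discard(segment)

    def drive_frame(self, frame_by_segment):
        # Drive each visible segment's slice of the frame over its own lane;
        # folded segments are simply skipped until they unfold.
        for seg, lane in self.segment_to_lane.items():
            if seg in self.folded:
                continue
            self.iface.drive(lane, frame_by_segment[seg])
```

A virtual segment (claim 2) would just be an extra entry in `segment_to_lane` whose frame data is synthesized rather than taken from the visible panel.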
2,600
10,889
10,889
15,720,407
2,654
An integrated circuit for implementing at least a portion of a personal audio device may include a processing circuit to implement an adaptive filter having a response that generates an anti-noise signal to reduce the presence of the ambient audio sounds at an error microphone, implement a coefficient control block that shapes the response of the adaptive filter in conformity with the error microphone signal by computing coefficients that determine the response of the adaptive filter to minimize the ambient audio sounds at the error microphone, and responsive to detecting a condition that triggers a reset of the adaptive filter, increment the coefficients in a plurality of steps from initial values of the coefficients at a time of triggering the reset to final values of the coefficients at a conclusion of the reset.
1. An integrated circuit for implementing at least a portion of a personal audio device, comprising: an output for providing a signal to a transducer including both a source audio signal for playback to a listener and an anti-noise signal for countering the effects of ambient audio sounds in an acoustic output of the transducer; an error microphone input for receiving an error microphone signal indicative of the output of the transducer and the ambient audio sounds at the transducer; and a processing circuit configured to: implement an adaptive filter having a response that generates the anti-noise signal to reduce the presence of the ambient audio sounds at the error microphone; implement a coefficient control block that shapes the response of the adaptive filter in conformity with the error microphone signal by computing coefficients that determine the response of the adaptive filter to minimize the ambient audio sounds at the error microphone; and responsive to detecting a condition that triggers a reset of the adaptive filter, increment the coefficients in a plurality of steps from initial values of the coefficients at a time of triggering the reset to final values of the coefficients at a conclusion of the reset. 2. The integrated circuit of claim 1, wherein the condition comprises one or more of wind noise, scratching on a housing of the personal audio device, a substantially tonal ambient sound, a divergence of the coefficients, and an excessive increase in a magnitude of the coefficients. 3. The integrated circuit of claim 1, wherein the adaptive filter comprises a feedback filter that generates at least a portion of the anti-noise signal by applying the response of the adaptive filter to the error microphone signal. 4. The integrated circuit of claim 1, wherein: the adaptive filter comprises a secondary path estimate filter configured to model an electro-acoustic path of the source audio signal and have a response that generates a secondary path estimate from the source audio signal; and the coefficient control block comprises a secondary path estimate coefficient control block that shapes the response of the secondary path estimate filter in conformity with the source audio signal and a playback corrected error by adapting the response of the secondary path estimate filter to minimize the playback corrected error, wherein the playback corrected error is based on a difference between the error microphone signal and the secondary path estimate. 5. The integrated circuit of claim 1, wherein: the integrated circuit further comprises a reference microphone input for receiving a reference microphone signal indicative of the ambient audio sounds; the adaptive filter comprises a feedforward filter having a response that generates the anti-noise signal from the reference signal to reduce the presence of the ambient audio sounds heard by the listener; and the coefficient control block comprises a feedforward coefficient control block that shapes the response of the adaptive filter in conformity with the error microphone signal and the reference microphone signal to minimize the ambient audio sounds at the error microphone. 6. 
The integrated circuit of claim 5, wherein the condition comprises one or more of wind noise, scratching on a housing of the personal audio device, a substantially tonal ambient sound, a divergence of the coefficients, a signal level of the reference microphone signal falling outside of a predetermined range, and an excessive increase in a magnitude of the coefficients. 7. The integrated circuit of claim 1, wherein the processing circuit is configured to increment the coefficients from the initial values to the final values by using a weighted moving average of the coefficients. 8. The integrated circuit of claim 1, wherein the processing circuit is configured to increment the coefficients from the initial values to the final values by using an additive average of the coefficients. 9. The integrated circuit of claim 1, wherein in each of the plurality of steps, a degree of change of the coefficients during such step is set by a configurable smoothing factor. 10. The integrated circuit of claim 9, wherein the configurable smoothing factor is set by a type of the condition that triggers the reset. 11. The integrated circuit of claim 1, wherein the plurality of steps occur over a configurable duration of time. 12. The integrated circuit of claim 1, wherein the final values of the coefficients comprise a set of pre-determined coefficients. 13. The integrated circuit of claim 12, wherein the set of pre-determined coefficients comprises a set of zero values. 14. The integrated circuit of claim 1, wherein incrementing the coefficients in the plurality of steps serves to gradually reset the coefficients. 15. A method comprising: receiving an error microphone signal indicative of an output of a transducer and ambient audio sounds at the transducer; generating an anti-noise signal for countering the effects of ambient audio sounds at an acoustic output of the transducer, wherein generating the anti-noise signal comprises: implementing an adaptive filter having a response that generates the anti-noise signal to reduce the presence of the ambient audio sounds in the error microphone signal; and implementing a coefficient control block that shapes the response of the adaptive filter in conformity with the error microphone signal by computing coefficients that determine the response of the adaptive filter to minimize the ambient audio sounds at the error microphone; responsive to detecting a condition that triggers a reset of the adaptive filter, incrementing the coefficients in a plurality of steps from initial values of the coefficients at a time of triggering the reset to final values of the coefficients at a conclusion of the reset; and combining the anti-noise signal with a source audio signal to generate an audio signal provided to the transducer. 16. The method of claim 15, wherein the condition comprises one or more of wind noise, scratching on a housing of a personal audio device, a substantially tonal ambient sound, a divergence of the coefficients, and an excessive increase in a magnitude of the coefficients. 17. The method of claim 15, wherein the adaptive filter comprises a feedback filter that generates at least a portion of the anti-noise signal by applying the response of the adaptive filter to the error microphone signal. 18. 
The method of claim 15, wherein: the adaptive filter comprises a secondary path estimate filter configured to model an electro-acoustic path of the source audio signal and have a response that generates a secondary path estimate from the source audio signal; and the coefficient control block comprises a secondary path estimate coefficient control block that shapes the response of the secondary path estimate filter in conformity with the source audio signal and a playback corrected error by adapting the response of the secondary path estimate filter to minimize the playback corrected error, wherein the playback corrected error is based on a difference between the error microphone signal and the secondary path estimate. 19. The method of claim 15, further comprising receiving a reference microphone signal indicative of the ambient audio sounds, wherein: the adaptive filter comprises a feedforward filter having a response that generates the anti-noise signal from the reference signal to reduce the presence of the ambient audio sounds heard by a listener; and the coefficient control block comprises a feedforward coefficient control block that shapes the response of the adaptive filter in conformity with the error microphone signal and the reference microphone signal to minimize the ambient audio sounds at the error microphone. 20. The method of claim 19, wherein the condition comprises one or more of wind noise, scratching on a housing of a personal audio device, a substantially tonal ambient sound, a divergence of the coefficients, a signal level of the reference microphone signal falling outside of a predetermined range, and an excessive increase in a magnitude of the coefficients. 21. The method of claim 15, further comprising incrementing the coefficients from the initial values to the final values by using a weighted moving average of the coefficients. 22. The method of claim 15, further comprising incrementing the coefficients from the initial values to the final values by using an additive average of the coefficients. 23. The method of claim 15, wherein in each of the plurality of steps, a degree of change of the coefficients during such step is set by a configurable smoothing factor. 24. The method of claim 23, wherein the configurable smoothing factor is set by a type of the condition that triggers the reset. 25. The method of claim 15, wherein the plurality of steps occur over a configurable duration of time. 26. The method of claim 15, wherein the final values of the coefficients comprise a set of pre-determined coefficients. 27. The method of claim 26, wherein the set of pre-determined coefficients comprises a set of zero values. 28. The method of claim 15, wherein incrementing the coefficients in the plurality of steps serves to gradually reset the coefficients.
An integrated circuit for implementing at least a portion of a personal audio device may include a processing circuit to implement an adaptive filter having a response that generates an anti-noise signal to reduce the presence of the ambient audio sounds at an error microphone, implement a coefficient control block that shapes the response of the adaptive filter in conformity with the error microphone signal by computing coefficients that determine the response of the adaptive filter to minimize the ambient audio sounds at the error microphone, and responsive to detecting a condition that triggers a reset of the adaptive filter, increment the coefficients in a plurality of steps from initial values of the coefficients at a time of triggering the reset to final values of the coefficients at a conclusion of the reset.1. An integrated circuit for implementing at least a portion of a personal audio device, comprising: an output for providing a signal to a transducer including both a source audio signal for playback to a listener and an anti-noise signal for countering the effects of ambient audio sounds in an acoustic output of the transducer; an error microphone input for receiving an error microphone signal indicative of the output of the transducer and the ambient audio sounds at the transducer; and a processing circuit configured to: implement an adaptive filter having a response that generates the anti-noise signal to reduce the presence of the ambient audio sounds at the error microphone; implement a coefficient control block that shapes the response of the adaptive filter in conformity with the error microphone signal by computing coefficients that determine the response of the adaptive filter to minimize the ambient audio sounds at the error microphone; and responsive to detecting a condition that triggers a reset of the adaptive filter, increment the coefficients in a plurality of steps from initial values of the coefficients at a time of triggering the reset to final values of the coefficients at a conclusion of the reset. 2. The integrated circuit of claim 1, wherein the condition comprises one or more of wind noise, scratching on a housing of the personal audio device, a substantially tonal ambient sound, a divergence of the coefficients, and an excessive increase in a magnitude of the coefficients. 3. The integrated circuit of claim 1, wherein the adaptive filter comprises a feedback filter that generates at least a portion of the anti-noise signal by applying the response of the adaptive filter to the error microphone signal. 4. The integrated circuit of claim 1, wherein: the adaptive filter comprises a secondary path estimate filter configured to model an electro-acoustic path of the source audio signal and have a response that generates a secondary path estimate from the source audio signal; and the coefficient control block comprises a secondary path estimate coefficient control block that shapes the response of the secondary path estimate filter in conformity with the source audio signal and a playback corrected error by adapting the response of the secondary path estimate filter to minimize the playback corrected error, wherein the playback corrected error is based on a difference between the error microphone signal and the secondary path estimate. 5. 
The integrated circuit of claim 1, wherein: the integrated circuit further comprises a reference microphone input for receiving a reference microphone signal indicative of the ambient audio sounds; the adaptive filter comprises a feedforward filter having a response that generates the anti-noise signal from the reference signal to reduce the presence of the ambient audio sounds heard by the listener; and the coefficient control block comprises a feedforward coefficient control block that shapes the response of the adaptive filter in conformity with the error microphone signal and the reference microphone signal to minimize the ambient audio sounds at the error microphone. 6. The integrated circuit of claim 5, wherein the condition comprises one or more of wind noise, scratching on a housing of the personal audio device, a substantially tonal ambient sound, a divergence of the coefficients, a signal level of the reference microphone signal falling outside of a predetermined range, and an excessive increase in a magnitude of the coefficients. 7. The integrated circuit of claim 1, wherein the processing circuit is configured to increment the coefficients from the initial values to the final values by using a weighted moving average of the coefficients. 8. The integrated circuit of claim 1, wherein the processing circuit is configured to increment the coefficients from the initial values to the final values by using an additive average of the coefficients. 9. The integrated circuit of claim 1, wherein in each of the plurality of steps, a degree of change of the coefficients during such step is set by a configurable smoothing factor. 10. The integrated circuit of claim 9, wherein the configurable smoothing factor is set by a type of the condition that triggers the reset. 11. The integrated circuit of claim 1, wherein the plurality of steps occur over a configurable duration of time. 12. The integrated circuit of claim 1, wherein the final values of the coefficients comprise a set of pre-determined coefficients. 13. The integrated circuit of claim 12, wherein the set of pre-determined coefficients comprises a set of zero values. 14. The integrated circuit of claim 1, wherein incrementing the coefficients in the plurality of steps serves to gradually reset the coefficients. 15. A method comprising: receiving an error microphone signal indicative of an output of a transducer and ambient audio sounds at the transducer; generating an anti-noise signal for countering the effects of ambient audio sounds at an acoustic output of the transducer, wherein generating the anti-noise signal comprises: implementing an adaptive filter having a response that generates the anti-noise signal to reduce the presence of the ambient audio sounds in the error microphone signal; and implementing a coefficient control block that shapes the response of the adaptive filter in conformity with the error microphone signal by computing coefficients that determine the response of the adaptive filter to minimize the ambient audio sounds at the error microphone; responsive to detecting a condition that triggers a reset of the adaptive filter, incrementing the coefficients in a plurality of steps from initial values of the coefficients at a time of triggering the reset to final values of the coefficients at a conclusion of the reset; and combining the anti-noise signal with a source audio signal to generate an audio signal provided to the transducer. 16. 
The method of claim 15, wherein the condition comprises one or more of wind noise, scratching on a housing of a personal audio device, a substantially tonal ambient sound, a divergence of the coefficients, and an excessive increase in a magnitude of the coefficients. 17. The method of claim 15, wherein the adaptive filter comprises a feedback filter that generates at least a portion of the anti-noise signal by applying the response of the adaptive filter to the error microphone signal. 18. The method of claim 15, wherein: the adaptive filter comprises a secondary path estimate filter configured to model an electro-acoustic path of the source audio signal and have a response that generates a secondary path estimate from the source audio signal; and the coefficient control block comprises a secondary path estimate coefficient control block that shapes the response of the secondary path estimate filter in conformity with the source audio signal and a playback corrected error by adapting the response of the secondary path estimate filter to minimize the playback corrected error, wherein the playback corrected error is based on a difference between the error microphone signal and the secondary path estimate. 19. The method of claim 15, further comprising receiving a reference microphone signal indicative of the ambient audio sounds, wherein: the adaptive filter comprises a feedforward filter having a response that generates the anti-noise signal from the reference signal to reduce the presence of the ambient audio sounds heard by a listener; and the coefficient control block comprises a feedforward coefficient control block that shapes the response of the adaptive filter in conformity with the error microphone signal and the reference microphone signal to minimize the ambient audio sounds at the error microphone. 20. The method of claim 19, wherein the condition comprises one or more of wind noise, scratching on a housing of a personal audio device, a substantially tonal ambient sound, a divergence of the coefficients, a signal level of the reference microphone signal falling outside of a predetermined range, and an excessive increase in a magnitude of the coefficients. 21. The method of claim 15, further comprising incrementing the coefficients from the initial values to the final values by using a weighted moving average of the coefficients. 22. The method of claim 15, further comprising incrementing the coefficients from the initial values to the final values by using an additive average of the coefficients. 23. The method of claim 15, wherein in each of the plurality of steps, a degree of change of the coefficients during such step is set by a configurable smoothing factor. 24. The method of claim 23, wherein the configurable smoothing factor is set by a type of the condition that triggers the reset. 25. The method of claim 15, wherein the plurality of steps occur over a configurable duration of time. 26. The method of claim 15, wherein the final values of the coefficients comprise a set of pre-determined coefficients. 27. The method of claim 26, wherein the set of pre-determined coefficients comprises a set of zero values. 28. The method of claim 15, wherein incrementing the coefficients in the plurality of steps serves to gradually reset the coefficients.
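Claims 7, 9, and 11 make the reset behavior concrete: the coefficients move from their values at the trigger to the final values (for example all zeros, claims 12 and 13) over several steps, with the per-step change set by a configurable smoothing factor. Below is a sketch of that gradual reset as a weighted moving average; the linear-recursion form and the parameter names are assumptions, not the patented update rule.

```python
import numpy as np

def smooth_reset(coeffs, final_coeffs, alpha, steps):
    """Gradually reset adaptive-filter coefficients over `steps` increments.

    Each step moves the coefficients toward `final_coeffs` by the
    configurable smoothing factor `alpha` (claim 9); the recursion is a
    weighted moving average of the current and target values (claim 7).
    """
    c = np.asarray(coeffs, dtype=np.float64)
    f = np.asarray(final_coeffs, dtype=np.float64)
    history = []
    for _ in range(steps):
        c = (1.0 - alpha) * c + alpha * f  # degree of change set by alpha
        history.append(c.copy())
    return history
```

Per claim 10, the caller would pick `alpha` (and with it the effective reset duration of claim 11) based on which condition triggered the reset, for instance resetting faster on coefficient divergence than on transient wind noise, so the anti-noise signal fades out rather than cutting off audibly.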
2,600
10,890
10,890
15,740,750
2,647
Provided is a communication system or the like in which a service based on ProSe is implemented under the management of a network operator. Processing based on a discovery request procedure for discovering a proximity terminal or being discovered is performed based on authentication of a server device operated by the network operator. In addition, the network operator updates the processing based on the discovery request procedure in accordance with a policy of the network operator.
1. A terminal device comprising: a transmission/reception unit configured to transmit a first service authorization request message, to request permission for the terminal device to be directly detected by a proximity terminal device positioned in proximity as a relay terminal device or for the terminal device to perform direct communication with the proximity terminal device, to a device equipped with a ProSe function, and receive an authorization message for the first service authorization request message from the device equipped with the ProSe function; and a control unit configured to start, based on receiving a timer included in the authorization message and indicating a period of time during which the first service authorization is valid, counting of the timer. 2. The terminal device according to claim 1, wherein the transmission/reception unit is further configured to transmit a second service authorization request message in a case where the counting of the timer is ended. 3. The terminal device according to claim 1, wherein the transmission/reception unit is further configured to receive a first service code from the device equipped with the ProSe function, and the first service code is identification information of a connection service provided by the terminal device. 4. A device equipped with a ProSe function, comprising: a transmission/reception unit configured to receive a first service authorization request message from a terminal device and transmit an authorization message for the first service authorization request message to the terminal device, wherein the first service authorization request message is a message transmitted to request permission for the terminal device to be directly detected by a proximity terminal device positioned in proximity as a relay terminal device or to perform direct communication with the proximity terminal device, and the authorization message includes a timer indicating a period of time during which the first service authorization is valid. 5. A communication control method for a terminal device, comprising the steps of: transmitting a first service authorization request message, to request permission for the terminal device to be directly detected by a proximity terminal device positioned in proximity as a relay terminal device or to perform direct communication with the proximity terminal device, to a device equipped with a ProSe function; receiving an authorization message for the first service authorization request message from the device equipped with the ProSe function; and starting counting of a timer based on receiving a period of time during which the first service authorization is valid, the period of time being included in the authorization message. 6. The communication control method for a terminal device according to claim 5, further comprising the step of: transmitting a second service authorization request message in a case that the counting of the timer is ended. 7. The communication control method for a terminal device according to claim 5, further comprising the step of: receiving a first service code from the device equipped with the ProSe function, wherein the first service code is identification information of a connection service provided by the terminal device. 8. 
A communication control method for a device equipped with a ProSe function, comprising the steps of: receiving a first service authorization request message from a terminal device; and transmitting an authorization message for the first service authorization request message to the terminal device, wherein the first service authorization request message is a message transmitted to request permission for the terminal device to be directly detected by a proximity terminal device positioned in proximity as a relay terminal device or to perform direct communication with the proximity terminal device, and the authorization message includes a timer indicating a period of time during which the first service authorization is valid.
Provided is a communication system or the like in which a service based on ProSe is implemented under the management of a network operator. Processing based on a discovery request procedure for discovering a proximity terminal or being discovered is performed based on authentication of a server device operated by the network operator. In addition, the network operator updates the processing based on the discovery request procedure in accordance with a policy of the network operator.1. A terminal device comprising: a transmission/reception unit configured to transmit a first service authorization request message, to request permission for the terminal device to be directly detected by a proximity terminal device positioned in proximity as a relay terminal device or for the terminal device to perform direct communication with the proximity terminal device, to a device equipped with a ProSe function, and receive an authorization message for the first service authorization request message from the device equipped with the ProSe function; and a control unit configured to start, based on receiving a timer included in the authorization message and indicating a period of time during which the first service authorization is valid, counting of the timer. 2. The terminal device according to claim 1, wherein the transmission/reception unit is further configured to transmit a second service authorization request message in a case where the counting of the timer is ended. 3. The terminal device according to claim 1, wherein the transmission/reception unit is further configured to receive a first service code from the device equipped with the ProSe function, and the first service code is identification information of a connection service provided by the terminal device. 4. A device equipped with a ProSe function, comprising: a transmission/reception unit configured to receive a first service authorization request message from a terminal device and transmit an authorization message for the first service authorization request message to the terminal device, wherein the first service authorization request message is a message transmitted to request permission for the terminal device to be directly detected by a proximity terminal device positioned in proximity as a relay terminal device or to perform direct communication with the proximity terminal device, and the authorization message includes a timer indicating a period of time during which the first service authorization is valid. 5. A communication control method for a terminal device, comprising the steps of: transmitting a first service authorization request message, to request permission for the terminal device to be directly detected by a proximity terminal device positioned in proximity as a relay terminal device or to perform direct communication with the proximity terminal device, to a device equipped with a ProSe function; receiving an authorization message for the first service authorization request message from the device equipped with the ProSe function; and starting counting of a timer based on receiving a period of time during which the first service authorization is valid, the period of time being included in the authorization message. 6. The communication control method for a terminal device according to claim 5, further comprising the step of: transmitting a second service authorization request message in a case that the counting of the timer is ended. 7. 
The communication control method for a terminal device according to claim 5, further comprising the step of: receiving a first service code from the device equipped with the ProSe function, wherein the first service code is identification information of a connection service provided by the terminal device. 8. A communication control method for a device equipped with a ProSe function, comprising the steps of: receiving a first service authorization request message from a terminal device; and transmitting an authorization message for the first service authorization request message to the terminal device, wherein the first service authorization request message is a message transmitted to request permission for the terminal device to be directly detected by a proximity terminal device positioned in proximity as a relay terminal device or to perform direct communication with the proximity terminal device, and the authorization message includes a timer indicating a period of time during which the first service authorization is valid.
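The claim set reduces to a simple client-side state machine: request authorization, start counting the validity timer carried in the authorization message, and issue a second request once the timer ends (claims 1, 2, 5, and 6). The sketch below assumes a `prose_function.authorize(...)` call returning a dict with a `validity_seconds` field; neither the interface nor the message contents are 3GPP-defined formats.

```python
import threading

class ProSeTerminal:
    """Sketch of the terminal-side authorization flow of claims 1-2/5-6."""

    def __init__(self, prose_function):
        self.prose_function = prose_function  # stands in for the ProSe function
        self.authorized = False

    def request_authorization(self):
        # First service authorization request: ask to be directly detected
        # as a relay, or to perform direct communication (claim 1).
        reply = self.prose_function.authorize({"type": "relay-discovery"})
        self.authorized = True
        # Start counting the validity timer carried in the authorization
        # message; when it ends, send a second request (claims 2 and 6).
        timer = threading.Timer(reply["validity_seconds"], self._on_expiry)
        timer.daemon = True
        timer.start()

    def _on_expiry(self):
        self.authorized = False
        self.request_authorization()
```

The re-request on expiry keeps the service authorization continuously valid while letting the operator bound how long any single grant lasts.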
2,600
10,891
10,891
15,779,190
2,628
Provided is a source driver that is used for supplying a voltage corresponding to a gradation value of a video signal to a data line of a display unit, the source driver including: a resistor having an end to which a predetermined power supply voltage is applied; and a current source that is connected to another end of the resistor, the amount of current of the current source being controlled according to the gradation value of the video signal, the voltage corresponding to the gradation value of the video signal being supplied from the other end of the resistor.
1. A source driver, which is used for supplying a voltage corresponding to a gradation value of a video signal to a data line of a display unit, the source driver comprising: a resistor having an end to which a predetermined power supply voltage is applied; and a current source that is connected to another end of the resistor, the amount of current of the current source being controlled according to the gradation value of the video signal, the voltage corresponding to the gradation value of the video signal being supplied from the other end of the resistor. 2. The source driver according to claim 1, wherein the other end of the resistor and the current source are connected to each other via the data line. 3. The source driver according to claim 1, wherein a connection point of the other end of the resistor and the current source is connected to the data line. 4. The source driver according to claim 1, wherein the amount of current of the current source is controlled by an output voltage of a D/A converter unit that outputs the voltage corresponding to the gradation value of the video signal. 5. The source driver according to claim 4, further comprising a selector circuit that causes a plurality of current sources to correspond to a single output unit of the D/A converter unit. 6. The source driver according to claim 5, wherein each of the plurality of current sources includes a capacitor unit that holds a voltage supplied from the D/A converter unit. 7. The source driver according to claim 1, wherein the current source includes a transistor. 8. The source driver according to claim 7, wherein the current source includes a field-effect transistor and further includes a correction circuit that corrects variations in characteristics of the field-effect transistor. 9. The source driver according to claim 8, wherein the correction circuit performs correction corresponding to a value of a threshold voltage of the field-effect transistor. 10. The source driver according to claim 9, wherein the correction circuit causes the capacitor unit to hold a voltage that is corrected according to the value of the threshold voltage of the field-effect transistor, the capacitor unit being connected between a gate and a source of the field-effect transistor. 11. A display apparatus, comprising: a display unit; and a source driver that is used for supplying a voltage corresponding to a gradation value of a video signal to a data line of the display unit, the source driver including a resistor having an end to which a predetermined power supply voltage is applied, and a current source that is connected to another end of the resistor, the amount of current of the current source being controlled according to the gradation value of the video signal, the voltage corresponding to the gradation value of the video signal being supplied from the other end of the resistor to the data line. 12. The display apparatus according to claim 11, wherein the display unit includes a display element that is configured to provide a blacker display as a voltage supplied to the data line approaches a predetermined power supply voltage. 13. The display apparatus according to claim 12, wherein the display element at least includes a current-driven light-emitting unit, a storage capacitor that holds a voltage supplied from the data line, and a drive transistor that provides a current corresponding to the voltage held by the storage capacitor to the light-emitting unit. 14. 
An electronic apparatus, comprising a display apparatus that includes a display unit, and a source driver that is used for supplying a voltage corresponding to a gradation value of a video signal to a data line of the display unit, the source driver including a resistor having an end to which a predetermined power supply voltage is applied, and a current source that is connected to another end of the resistor, the amount of current of the current source being controlled according to the gradation value of the video signal, the voltage corresponding to the gradation value of the video signal being supplied from the other end of the resistor to the data line.
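Claims 8-10 above add a correction circuit that cancels threshold-voltage variation in the current-source FET by storing a corrected gate-source voltage on a capacitor. A minimal sketch, assuming a square-law FET model and a simple additive correction (both assumptions, not the patent's circuit):

```python
# Hedged sketch of the threshold-voltage correction of claims 8-10, assuming
# the correction simply adds the measured Vth to a target overdrive voltage
# held on the gate-source capacitor. Model and names are illustrative.

def corrected_gate_voltage(v_overdrive, v_th_measured):
    """Voltage stored on the gate-source capacitor after correction.

    Holding Vgs = Vth + Vov makes the drain current depend only on Vov,
    cancelling per-device Vth variation.
    """
    return v_th_measured + v_overdrive

def drain_current(v_gs, v_th, k=1e-3):
    """Square-law saturation current: I = k * (Vgs - Vth)^2, zero below Vth."""
    v_ov = max(v_gs - v_th, 0.0)
    return k * v_ov ** 2

# Two transistors with different thresholds sink the same current once corrected.
for v_th in (0.6, 0.9):
    v_gs = corrected_gate_voltage(v_overdrive=1.0, v_th_measured=v_th)
    print(f"Vth={v_th} V -> Vgs={v_gs} V -> I={drain_current(v_gs, v_th)*1e3:.3f} mA")
```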
2,600
10,892
10,892
15,447,016
2,626
A display device having touch sensors and a method of driving the same are disclosed. The display device includes a display panel including a pixel array including pixels and a touch sensor array including touch sensors formed in the pixel array, the pixel array being divided into blocks, a gate driver to sequentially drive a plurality of gate lines in the pixel array in a block unit, a data driver to drive a plurality of data lines in the pixel array when the gate lines are driven, a touch controller to sequentially drive the touch sensor array in the block unit, and a timing controller to divide one frame into at least one display mode at which the pixel array is driven and at least one touch sensing mode at which the touch sensor array is driven, and to control the gate driver, the data driver, and the touch controller so that the display mode and the touch sensing mode alternate.
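The timing controller's division of a frame into alternating per-block display and touch-sensing periods can be sketched as a schedule builder. Block count, frame length, and the display/touch split below are illustrative assumptions; the patent fixes none of them.

```python
# Illustrative sketch of the claimed frame timing, assuming the frame is split
# evenly among the blocks and each block gets one display period followed by
# one touch-sensing period.

def frame_schedule(num_blocks=4, frame_ms=16.7, display_fraction=0.75):
    """Return (mode, block, start_ms, end_ms) tuples for one frame.

    Display and touch-sensing periods alternate block by block, so the
    pixel array and the in-cell touch sensors are never driven at once.
    """
    slot = frame_ms / num_blocks
    schedule = []
    t = 0.0
    for block in range(num_blocks):
        t_disp = slot * display_fraction
        schedule.append(("display", block, t, t + t_disp))
        schedule.append(("touch", block, t + t_disp, t + slot))
        t += slot
    return schedule

for entry in frame_schedule():
    print(entry)
```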
1-10. (canceled) 11. A display device, comprising: a display panel comprising: a plurality of gate lines; a plurality of data lines; a plurality of touch sensor arrays; and a plurality of pixel arrays, each comprising a transistor coupled to a gate line and a data line; a gate driver configured to drive the plurality of gate lines in the display panel; a data driver configured to drive the plurality of data lines in the display panel; and a touch controller configured to drive the plurality of touch sensor arrays in the display panel, wherein the display panel is configured for at least a first display mode and a first touch sensing mode during one frame, and wherein a display region of the display panel during the first display mode is smaller than a touch region of the display panel during the first touch sensing mode. 12. The display device according to claim 11, wherein: the gate driver and the data driver are configured to drive the plurality of pixel arrays in the first display mode in response to a first level of a mode switching signal; and the touch controller is configured to drive the plurality of touch sensor arrays in the first touch sensing mode in response to a second level of the mode switching signal. 13. The display device according to claim 11, wherein the display panel comprises one of a liquid crystal panel and an organic light-emitting diode display panel. 14. The display device according to claim 11, wherein: the plurality of touch sensor arrays comprise capacitive touch sensors; and each of the capacitive touch sensors is configured to detect a touch.
2,600
10,893
10,893
16,409,305
2,675
Transmitting node and receiving node for audio coding, and methods therein. The nodes are operable to encode/decode speech and to apply a discontinuous transmission (DTX) scheme comprising transmission/reception of Silence Insertion Descriptor (SID) frames during speech inactivity. The method in the transmitting node comprises determining, from amongst a number N of hangover frames, a set Y of frames that are representative of background noise, and transmitting the N hangover frames, comprising at least said set Y of frames, to the receiving node. The method further comprises transmitting a first SID frame to the receiving node in association with the transmission of the N hangover frames, where the SID frame comprises information indicating the determined set Y of hangover frames to the receiving node. The method enables the receiving node to generate comfort noise based on the hangover frames most adequate for that purpose.
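On the receiving side, the counter in the first SID frame tells the decoder how many of the most recent frames are hangover frames usable for the background-noise estimate. A hedged sketch, assuming each frame reduces to a scalar energy and that comfort noise is parameterized by their mean (both illustrative simplifications, not the codec's actual procedure):

```python
import statistics

def comfort_noise_level(hangover_energies, sid_counter):
    """Average the noise estimates of the last `sid_counter` hangover frames.

    The SID frame's counter tells the receiver how many of the most recent
    frames were hangover frames suitable for the background-noise estimate.
    """
    if sid_counter <= 0 or sid_counter > len(hangover_energies):
        raise ValueError("SID counter inconsistent with received hangover frames")
    return statistics.mean(hangover_energies[-sid_counter:])

# Seven frames received before the SID frame; the counter says the last 4
# are hangover frames representative of background noise.
energies = [52.0, 48.0, 12.0, 11.5, 12.2, 11.8, 12.1]
print(comfort_noise_level(energies, sid_counter=4))
```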
1. A method performed by a receiving node operable to decode speech and to apply a discontinuous transmission (DTX) scheme comprising reception of silence insertion descriptor (SID) frames and generation of comfort noise during speech inactivity, the method comprising: receiving a group of hangover frames transmitted by an encoder; receiving a first SID frame transmitted by the encoder, wherein the SID frame comprises a counter value equal to the number of hangover frames included in the group of hangover frames; and generating comfort noise based on the counter value. 2. The method of claim 1, wherein the first SID frame further comprises SID parameters. 3. The method of claim 1, wherein the number of hangover frames included in the group of hangover frames is dynamically variable based on properties of an input audio signal. 4. A receiving node operable to decode speech and to apply a discontinuous transmission (DTX) scheme comprising reception of silence insertion descriptor (SID) frames and generation of comfort noise during speech inactivity, the receiving node comprising: a receiver; and a data processing system, the data processing system being operative to: obtain a group of hangover frames transmitted by an encoder; obtain a first SID frame transmitted by the encoder, wherein the SID frame comprises a counter value equal to the number of hangover frames included in the group of hangover frames; and generate comfort noise based on the counter value. 5. The receiving node of claim 4, wherein the data processing system comprises a processor and a memory, and wherein said memory contains instructions executable by said processor. 6. The receiving node of claim 4, wherein the first SID frame further comprises SID parameters. 7. The receiving node of claim 4, wherein the number of hangover frames included in the group of hangover frames is dynamically variable based on properties of an input audio signal. 8. A computer program product (CPP), the CPP comprising a non-transitory computer readable medium storing computer program code which, when run in a receiving node, causes the receiving node to: obtain a group of hangover frames transmitted by an encoder; obtain a first silence insertion descriptor (SID) frame transmitted by the encoder, wherein the SID frame comprises a counter value equal to the number of hangover frames included in the group of hangover frames; and generate comfort noise based on the counter value. 9. The CPP of claim 8, wherein the first SID frame further comprises SID parameters. 10. The CPP of claim 8, wherein the number of hangover frames included in the group of hangover frames is dynamically variable based on properties of an input audio signal.
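For the transmitter side of the claimed scheme, building the first SID frame amounts to packing the SID parameters together with a counter equal to the hangover-group length. The dict-based frame and its field names below are hypothetical:

```python
# Transmitter-side sketch of the first SID frame of claims 1 and 8, assuming a
# simple dict-based frame with the counter set to the (dynamically variable)
# hangover length. Field names are hypothetical.

def build_sid_frame(hangover_frames, sid_parameters):
    """Pack a SID frame whose counter equals the number of hangover frames."""
    return {
        "type": "SID",
        "counter": len(hangover_frames),   # claim 1: counter == hangover count
        "parameters": sid_parameters,
    }

hangover = ["frame_n-3", "frame_n-2", "frame_n-1"]  # placeholder frame objects
print(build_sid_frame(hangover, sid_parameters={"energy": 12.0}))
```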
2,600
10,894
10,894
15,293,811
2,622
The present invention discloses a method and an apparatus for displaying an operation interface, and a touchscreen terminal, where the method includes: receiving a touch operation that is input by a user in a first set area on a touchscreen; and when it is detected that the touch operation ends in a second set area, displaying, on the touchscreen, a second operation interface corresponding to the second set area. With this solution, a needed operation interface can be reached by means of a single touch operation, without performing complex operation steps to switch between operation interfaces.
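The core gesture logic, deciding which interface to display from where a touch starts and ends, can be sketched with rectangular set areas. The geometry and interface names are illustrative assumptions:

```python
# Minimal sketch of the claimed gesture handling, assuming rectangular set
# areas and that a touch is reported as its start and end coordinates.

def in_area(point, area):
    """Area is (x0, y0, x1, y1); point is (x, y)."""
    x, y = point
    x0, y0, x1, y1 = area
    return x0 <= x <= x1 and y0 <= y <= y1

def interface_for_touch(start, end, first_area, second_area):
    """Return which operation interface to display when the touch ends."""
    if not in_area(start, first_area):
        return None  # touch did not begin in the first set area
    if in_area(end, second_area):
        return "second_interface"  # claim 1: ended in the second set area
    if in_area(end, first_area):
        return "first_interface"   # claim 3: ended in the first set area
    return None

first = (0, 0, 100, 100)
second = (120, 0, 220, 100)
print(interface_for_touch(start=(50, 50), end=(150, 40),
                          first_area=first, second_area=second))
```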
1. A method for displaying an operation interface, comprising: receiving a touch operation that is input by a user in a first set area on a touchscreen; and when it is detected that the touch operation ends in a second set area, displaying, on the touchscreen, a second operation interface corresponding to the second set area. 2. The method according to claim 1, wherein displaying, on the touchscreen, the second operation interface corresponding to the second set area comprises: determining a position relationship between the second set area and the first set area; when the second set area is located in a first set position of the first set area, using an operation interface located before a first operation interface in a preset operation interface sequence as the second operation interface; when the second set area is located in a second set position of the first set area, using an operation interface located after the first operation interface in the preset operation interface sequence as the second operation interface; and displaying the second operation interface on the touchscreen. 3. The method according to claim 1, further comprising: when it is detected that the touch operation ends in the first set area, displaying, on the touchscreen, the first operation interface corresponding to the first set area. 4. The method according to claim 3, wherein after receiving the touch operation that is input by a user in the first set area on the touchscreen, before displaying, on the touchscreen, the second operation interface corresponding to the second set area, the method further comprises: displaying a part of the first operation interface, and displaying a prompt for switching between operation interfaces. 5. An apparatus for displaying an operation interface, comprising: a receiving unit, configured to receive a touch operation that is input by a user in a first set area on a touchscreen; a detection unit, configured to detect whether the touch operation ends in a second set area; and a first display unit, configured to: when the detection unit detects that the touch operation ends in the second set area, display, on the touchscreen, a second operation interface corresponding to the second set area. 6. The apparatus according to claim 5, wherein the first display unit comprises: a determining subunit, configured to determine a position relationship between the second set area and the first set area; a decision subunit, configured to: when the determining subunit determines that the second set area is located in a first set position of the first set area, use an operation interface located before the first operation interface in a preset operation interface sequence as the second operation interface; when the determining subunit determines that the second set area is located in a second set position of the first set area, use an operation interface located after the first operation interface in the preset operation interface sequence as the second operation interface; and a display subunit, configured to display the second operation interface on the touchscreen. 7. The apparatus according to claim 5, further comprising a second display unit, configured to: when the detection unit detects that the touch operation ends in the first set area, display, on the touchscreen, the first operation interface corresponding to the first set area. 8. 
The apparatus according to claim 7, further comprising a third display unit, configured to: after the receiving unit receives the touch operation that is input by the user in the first set area on the touchscreen, before the first display unit displays, on the touchscreen, the second operation interface corresponding to the second set area, display a part of the first operation interface, and display a prompt for switching between operation interfaces. 9. A touchscreen terminal, comprising: a transceiver, configured to receive a touch operation that is input by a user in a first set area on a touchscreen; a processor, configured to detect whether the touch operation ends in a second set area; and a first display, configured to: when the processor detects that the touch operation ends in the second set area, display, on the touchscreen, a second operation interface corresponding to the second set area. 10. The touchscreen terminal according to claim 9, wherein the first display comprises: a determining subunit, configured to determine a position relationship between the second set area and the first set area; a decision subunit, configured to: when the determining subunit determines that the second set area is located in a first set position of the first set area, use an operation interface located before the first operation interface in a preset operation interface sequence as the second operation interface; when the determining subunit determines that the second set area is located in a second set position of the first set area, use an operation interface located after the first operation interface in the preset operation interface sequence as the second operation interface; and a display subunit, configured to display the second operation interface on the touchscreen. 11. The touchscreen terminal according to claim 9, further comprising a second display, configured to: when the processor detects that the touch operation ends in the first set area, display, on the touchscreen, the first operation interface corresponding to the first set area. 12. The touchscreen terminal according to claim 11, further comprising a third display, configured to: after the transceiver receives the touch operation that is input by the user in the first set area on the touchscreen, before the first display displays, on the touchscreen, the second operation interface corresponding to the second set area, display a part of the first operation interface, and display a prompt for switching between operation interfaces.
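The decision subunit of claims 2, 6, and 10 steps backward or forward through a preset interface sequence depending on where the second set area lies relative to the first. A sketch, assuming "first set position" and "second set position" map to previous and next in the sequence (the patent leaves the positions abstract):

```python
# Illustrative decision logic; sequence contents and side labels are made up.

def select_interface(sequence, current_index, second_area_side):
    """Pick the interface before or after the current one in the preset sequence."""
    if second_area_side == "first_position":   # e.g. one side of the first area
        return sequence[max(current_index - 1, 0)]
    if second_area_side == "second_position":  # e.g. the other side
        return sequence[min(current_index + 1, len(sequence) - 1)]
    return sequence[current_index]

interfaces = ["dialer", "home", "camera"]
print(select_interface(interfaces, current_index=1, second_area_side="second_position"))
```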
2,600
10,895
10,895
16,266,994
2,631
Systems and methods for providing Chronos Channel interconnects in an ASIC are provided. Chronos Channels rely on a reduced set of timing assumptions and are robust against delay variations. Chronos Channels transmit data using delay-insensitive (DI) codes and quasi-delay-insensitive (QDI) logic. Chronos Channels are insensitive to all wire and gate delay variations, except for those belonging to a few specific forking logic paths called isochronic forks. Chronos Channels use temporal compression in internal paths to reduce the overheads of QDI logic and efficiently transmit data. A Chronos Channel is defined by a combination of a DI code, a temporal compression ratio, and hardware.
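Temporal compression, as used here, sends a wide word as several narrower portions over successive temporal slots of one cycle. The sketch below assumes equal-width portions and plain bit strings; the actual channel also wraps the data in a DI code, which is omitted:

```python
# Illustrative sketch of temporal compression; slot framing is an assumption.

def temporally_compress(bits, ratio):
    """Split a word into `ratio` portions, one per temporal slot of a cycle."""
    if len(bits) % ratio:
        raise ValueError("word width must be divisible by the compression ratio")
    width = len(bits) // ratio
    return [bits[i * width:(i + 1) * width] for i in range(ratio)]

def temporally_decompress(slots):
    """Receiver side: reassemble the portions in slot order."""
    return "".join(slots)

word = "10110100"
slots = temporally_compress(word, ratio=4)  # 8 bits sent over a 2-bit-wide path
print(slots, temporally_decompress(slots) == word)
```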
1. A point-to-point connection between a first intellectual property (IP) block and a second IP block of an application specific integrated circuit (ASIC), the point-to-point connection comprising: a transmitter (TX) associated with the first IP block, the TX comprising one or more bundled data (BD) encoders and one or more temporal compressors, and configured to transform input data signals from the first IP block into one or more temporally compressed BD asynchronous signals based, in part, on a temporal compression ratio, and to transmit the temporally compressed BD asynchronous signals via a channel; a receiver (RX) associated with the second IP block, the RX comprising one or more BD decoders and one or more temporal decompressors, and configured to receive the one or more BD asynchronous signals via the channel and restore the one or more compressed BD asynchronous signals to form a representation of the input data signals compliant with an input data format of the second IP block; wherein the channel is a timing independent channel between the first IP block and the second IP block, and wherein the one or more BD asynchronous signals from the TX are propagated via the timing independent channel in a self-timed fashion to the RX. 2. The point-to-point connection as recited in claim 1, wherein the BD handshake is a 2-phase handshake comprising a request signal and an acknowledge signal, wherein a request for data is represented by a rising transition or a falling transition of the request signal and a corresponding acknowledgement is represented by a rising transition or a falling transition of the acknowledge signal, respectively. 3. The point-to-point connection as recited in claim 1, wherein the BD handshake is a 4-phase handshake comprising a request signal and an acknowledge signal, wherein a request for data is represented by a rising transition of the request signal and a corresponding acknowledgement is represented by a rising transition of the acknowledge signal. 4. The point-to-point connection as recited in claim 1, wherein the one or more temporal compressors receive encoded data signals from the one or more BD encoders based on the input data and, to transform the input data signals into one or more temporally compressed BD asynchronous signals, serially distribute portions of the encoded data signals into a plurality of temporal slots of a cycle time based on the temporal compression ratio, wherein the cycle time limits the duration of time for the first IP block to transmit the input data signals and for the second IP block to form the representation of the input data signals. 5. The point-to-point connection as recited in claim 1, wherein the RX is configured to receive the one or more DI asynchronous signals from the TX via the channel, the received one or more DI asynchronous signals indicative of all the input data signals received at the TX from the first IP block and transformed by the TX. 6. The point-to-point connection as recited in claim 1, wherein the one or more BD encoders are communicatively coupled between the first IP block and the one or more temporal compressors. 7. The point-to-point connection as recited in claim 1, wherein the RX comprises one or more temporal decompressors communicatively coupled between one or more temporal compressors and the one or more BD decoders. 8. The point-to-point connection as recited in claim 1, wherein the point-to-point connection is configured to operate using one or more of analog signals and digital signals. 9. 
The point-to-point connection as recited in claim 1, wherein the point-to-point connection is configured to operate over one or more of a wireless data connection and a wired data connection. 10. A connection between one or more first intellectual property (IP) blocks and one or more receiving IP blocks, the connection comprising: at least one transmitter (TX) associated with the one or more first IP blocks, the at least one TX comprising one or more bundled data (BD) encoders and one or more temporal compressors, and configured to transform input data signals from the one or more first IP blocks into temporally compressed BD asynchronous signals; at least one timing independent channel between the at least one TX and a Flow Control block, wherein the BD asynchronous signal from the at least one TX is propagated via the at least one timing independent channel in a self-timed fashion to the Flow Control block; the Flow Control block comprising a bundled data Flow Control element configured to propagate the compressed BD asynchronous signals to at least one individual timing independent channel; and at least one receiver (RX) associated with the one or more receiving IP blocks, the at least one RX comprising one or more BD decoders and one or more temporal decompressors, and configured to receive the BD asynchronous signals from the Flow Control block and restore the compressed BD asynchronous signals received from the Flow Control block to form duplicates of the input data signals. 11. The connection as recited in claim 10, wherein the connection is a point-to-multipoint connection between a transmitting IP block and a plurality of receiving IP blocks, wherein the bundled data Flow Control element is configured to broadcast or selectively propagate the compressed BD asynchronous signals to a plurality of individual timing independent channels, and wherein the at least one RX comprises a plurality of RXs associated with the plurality of receiving IP blocks. 12. The connection as recited in claim 10, wherein the connection is a multipoint-to-point connection between a plurality of transmitting IP blocks and a receiving IP block, wherein the at least one TX comprises a plurality of TXs, each associated with a respective transmitting IP block, wherein the at least one timing independent channel comprises a plurality of timing independent channels between the plurality of TXs and the individual inputs of the Flow Control block, and the BD Flow Control element is configured to merge or selectively propagate the individual timing independent channels. 13. 
The connection as recited in claim 10, wherein the connection is a multipoint-to-multipoint connection between a plurality of transmitting IP blocks and a plurality of receiving IP blocks, wherein: the at least one TX comprises a plurality of TXs, each associated with a respective one of the transmitting IP blocks, the at least one timing independent channel comprises a plurality of timing independent channels between the plurality of TXs and individual inputs of the Flow Control block, the BD Flow Control element is configured to merge or selectively propagate the BD asynchronous signals to individual timing independent channels and to broadcast or selectively propagate the BD asynchronous signals to the individual timing independent channels, and the at least one RX comprises multiple RXs associated with the plurality of receiving IP blocks, and configured to receive the compressed BD asynchronous signals via the individual timing independent channels and restore the compressed BD asynchronous signals to form duplicates of the input data signals. 14. The connection as recited in claim 10, wherein the BD handshake is a 2-phase handshake comprising a request signal and an acknowledge signal, wherein a request for data is represented by a rising transition or a falling transition of the request signal and a corresponding acknowledgement is represented by a rising transition or a falling transition of the acknowledge signal, respectively. 15. The connection as recited in claim 10, wherein the BD handshake is a 4-phase handshake comprising a request signal and an acknowledge signal, wherein a request for data is represented by a rising transition of the request signal and a corresponding acknowledgement is represented by a rising transition of the acknowledge signal. 16. The connection as recited in claim 10, wherein the one or more temporal compressors receive encoded data signals from the one or more BD encoders based on the input data and, to transform the input data signals into one or more temporally compressed BD asynchronous signals, serially distribute portions of the encoded data signals into a plurality of temporal slots of a cycle time based on the temporal compression ratio, wherein the cycle time limits the duration of time for the one or more first IP blocks to transmit the input data signals and for the one or more receiving IP blocks to form the representation of the input data signals. 17. The connection as recited in claim 10, wherein the at least one RX is configured to receive the one or more DI asynchronous signals from the at least one TX via the channel, the received one or more DI asynchronous signals indicative of all of the input data signals received at the at least one TX from the one or more first IP blocks and transformed by the at least one TX. 18. The connection as recited in claim 10, wherein the one or more BD encoders are communicatively coupled between the one or more first IP blocks and the one or more temporal compressors. 19. The connection as recited in claim 10, wherein the at least one RX comprises one or more temporal decompressors communicatively coupled between one or more temporal compressors and the one or more BD decoders.
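The 2-phase bundled-data handshake of claims 2 and 14 treats any transition, rising or falling, on the request or acknowledge wire as an event. A toy model, purely illustrative:

```python
# Toy model of a 2-phase handshake; wire-level timing is abstracted away.

class TwoPhaseChannel:
    def __init__(self):
        self.req = 0
        self.ack = 0
        self.data = None

    def send(self, value):
        """Transmitter: place data, then toggle the request wire."""
        assert self.req == self.ack, "previous transfer not yet acknowledged"
        self.data = value
        self.req ^= 1  # rising or falling transition both mean "request"

    def receive(self):
        """Receiver: latch data, then toggle the acknowledge wire."""
        assert self.req != self.ack, "no pending request"
        value = self.data
        self.ack ^= 1  # transition acknowledges the transfer
        return value

ch = TwoPhaseChannel()
for v in ("a", "b", "c"):
    ch.send(v)
    print(ch.receive())
```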
2,600
10,896
10,896
16,838,037
2,664
Methods and systems for learnable defect detection for semiconductor applications are provided. One system includes a deep metric learning defect detection model configured for projecting a test image for a specimen and a corresponding reference image into latent space, determining a distance in the latent space between one or more different portions of the test image and corresponding portion(s) of the corresponding reference image, and detecting defects in the one or more different portions of the test image based on the determined distances. Another system includes a learnable low-rank reference image generator configured for removing noise from one or more test images for a specimen thereby generating one or more reference images corresponding to the one or more test images.
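The detection rule in the abstract reduces to: embed test and reference patches, measure latent-space distance per position, and threshold. The sketch below stands in a fixed random projection for the learned deep metric model (Siamese, triplet, or quadruplet per the claims); all names and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
PROJECTION = rng.standard_normal((16, 8))  # placeholder for the learned encoder

def embed(patches):
    """Map (n, 16) pixel patches to (n, 8) latent vectors."""
    return patches @ PROJECTION

def detect_defects(test_patches, ref_patches, threshold):
    """Return a boolean defect flag per patch from latent-space distances."""
    d = np.linalg.norm(embed(test_patches) - embed(ref_patches), axis=1)
    return d > threshold, d

ref = rng.standard_normal((5, 16))
test = ref.copy()
test[2] += 2.0  # inject a "defect" into one patch
flags, dist = detect_defects(test, ref, threshold=1.0)
print(flags)  # only the perturbed patch should be flagged
```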
1. A system configured to detect defects on a specimen, comprising: one or more computer systems; and one or more components executed by the one or more computer systems, wherein the one or more components comprise a deep metric learning defect detection model configured for: projecting a test image generated for a specimen and a corresponding reference image into latent space; for one or more different portions of the test image, determining a distance in the latent space between the one or more different portions and corresponding one or more portions of the corresponding reference image; and detecting defects in the one or more different portions of the test image based on the distances determined for the one or more different portions of the test image, respectively. 2. The system of claim 1, wherein the test image and the corresponding reference image are for corresponding locations in different dies on the specimen. 3. The system of claim 1, wherein the test image and the corresponding reference image are for corresponding locations in different cells on the specimen. 4. The system of claim 1, wherein the test image and the corresponding reference image are generated for the specimen without using design data for the specimen. 5. The system of claim 1, wherein the test image is generated for the specimen by an imaging system that directs energy to and detects energy from the specimen, and wherein the corresponding reference image is generated without using the specimen. 6. The system of claim 5, wherein the corresponding reference image is acquired from a database containing design data for the specimen. 7. The system of claim 1, wherein the one or more computer systems are configured for inputting design data for the specimen into the deep metric learning defect detection model, and wherein the deep metric learning defect detection model is further configured for performing said detecting using the design data. 8. The system of claim 1, wherein said detecting is performed with one or more parameters determined from care areas for the specimen. 9. The system of claim 8, wherein the one or more computer systems are configured for inputting information for the care areas into the deep metric learning defect detection model. 10. The system of claim 1, wherein said detecting is performed without information for care areas for the specimen. 11. The system of claim 1, wherein the test image is generated in a logic area of the specimen. 12. The system of claim 1, wherein the test image is generated in an array area of the specimen. 13. The system of claim 1, wherein the different portions of the test image comprise different pixels in the test image. 14. The system of claim 1, wherein the deep metric learning defect detection model is further configured for projecting an additional corresponding reference image into the latent space and determining an average of the corresponding reference image and the additional corresponding reference image and a reference region in the latent space, and wherein the one or more portions of the corresponding reference image used for determining the distance comprise the reference region. 15. The system of claim 1, wherein the corresponding reference image comprises a non-defective test image for the specimen, wherein projecting the corresponding reference image comprises learning a reference region in the latent space, and wherein the one or more portions of the corresponding reference image used for determining the distance comprise the reference region. 
16. The system of claim 1, wherein the deep metric learning defect detection model has a Siamese network architecture. 17. The system of claim 1, wherein the deep metric learning defect detection model has a triplet network architecture. 18. The system of claim 1, wherein the deep metric learning defect detection model has a quadruplet network architecture. 19. The system of claim 1, wherein the deep metric learning defect detection model comprises one or more deep learning convolution filters, and wherein the one or more computer systems are configured for determining a configuration of the one or more deep learning convolution filters based on physics involved in generating the test image. 20. The system of claim 1, wherein the deep metric learning defect detection model comprises one or more deep learning convolution filters, and wherein the one or more computer systems are configured for determining a configuration of the one or more deep learning convolution filters based on imaging hardware used for generating the test image. 21. The system of claim 20, wherein determining the configuration comprises determining one or more parameters of the one or more deep learning convolution filters based on a point spread function of the imaging hardware. 22. The system of claim 21, wherein the one or more parameters of the one or more deep learning convolution filters comprise one or more of filter size, filter symmetry, and filter depth. 23. The system of claim 21, wherein determining the one or more parameters of the one or more deep learning convolution filters comprises learning the one or more parameters by optimizing a loss function. 24. The system of claim 20, wherein determining the configuration comprises selecting the one or more deep learning convolution filters from a predetermined set of deep learning convolution filters based on a point spread function of the imaging hardware. 25. The system of claim 24, wherein one or more parameters of the one or more deep learning convolution filters in the predetermined set are fixed. 26. The system of claim 24, wherein determining the configuration further comprises fine tuning one or more initial parameters of the one or more deep learning convolution filters by optimizing a loss function. 27. The system of claim 1, wherein the one or more components further comprise a learnable low-rank reference image generator configured for generating the corresponding reference image, wherein the one or more computer systems are configured for inputting one or more test images generated for the specimen into the learnable low-rank reference image generator, wherein the one or more test images are generated for different locations on the specimen corresponding to the same location in a design for the specimen, and wherein the learnable low-rank reference image generator is further configured for removing noise from the one or more test images thereby generating the corresponding reference image. 28. 
The system of claim 1, wherein the test image and an additional test image are generated for the specimen with different modes of an imaging system, respectively; wherein the deep metric learning defect detection model is further configured for projecting the test image and the corresponding reference image into a first latent space, projecting the additional test image and an additional corresponding reference image into a second latent space, and combining the first and second latent spaces into a joint latent space; and wherein the latent space used for determining the distance is the joint latent space. 29. The system of claim 1, wherein the one or more computer systems are configured for inputting design data for the specimen into the deep metric learning defect detection model; wherein the test image and an additional test image are generated for the specimen with different modes of an imaging system, respectively; wherein the deep metric learning defect detection model is further configured for projecting the test image and the corresponding reference image into a first latent space, projecting the additional test image and an additional corresponding reference image into a second latent space, projecting the design data into a third latent space, and combining the first, second, and third latent spaces into a joint latent space; and wherein the latent space used for determining the distance is the joint latent space. 30. The system of claim 1, wherein the one or more computer systems are configured for inputting design data for the specimen into the deep metric learning defect detection model; wherein the deep metric learning defect detection model is further configured for projecting the test image and the corresponding reference image into a first latent space, projecting the design data into a second latent space, and combining the first and second latent spaces into a joint latent space; and wherein the latent space used for determining the distance is the joint latent space. 31. The system of claim 1, wherein the one or more computer systems are configured for inputting design data for the specimen into the deep metric learning defect detection model; wherein the test image and an additional test image are generated for the specimen with different modes of an imaging system, respectively; wherein the deep metric learning defect detection model is further configured for projecting a first set comprising one or more of the test image and the corresponding reference image, the additional test image and an additional corresponding reference image, and the design data into a first latent space, projecting a second set comprising one or more of the test image and the corresponding reference image, the additional test image and the additional corresponding reference image, and the design data into a second latent space, and combining the first and second latent spaces into a joint latent space; and wherein the latent space used for determining the distance is the joint latent space. 32. The system of claim 1, wherein the one or more computer systems are configured for training the deep metric learning defect detection model with one or more training images and pixel-level ground truth information for the one or more training images. 33. The system of claim 32, wherein the one or more training images and pixel-level ground truth information are generated from a process window qualification wafer. 34. 
The system of claim 1, wherein the one or more computer systems are configured for performing active learning for training the deep metric learning defect detection model. 35. The system of claim 1, wherein the specimen is a wafer. 36. The system of claim 1, wherein the specimen is a reticle. 37. A system configured to generate a reference image for a specimen, comprising: one or more computer systems; and one or more components executed by the one or more computer systems, wherein the one or more components comprise a learnable low-rank reference image generator, wherein the one or more computer systems are configured for inputting one or more test images for a specimen into the learnable low-rank reference image generator, wherein the one or more test images are generated for different locations on the specimen corresponding to the same location in a design for the specimen, and wherein the learnable low-rank reference image generator is configured for removing noise from the one or more test images thereby generating one or more reference images corresponding to the one or more test images; and wherein a defect detection component detects defects on the specimen based on the one or more test images and their corresponding one or more reference images. 38.-65. (canceled)
Methods and systems for learnable defect detection for semiconductor applications are provided. One system includes a deep metric learning defect detection model configured for projecting a test image for a specimen and a corresponding reference image into latent space, determining a distance in the latent space between one or more different portions of the test image and corresponding portion(s) of the corresponding reference image, and detecting defects in the one or more different portions of the test image based on the determined distances. Another system includes a learnable low-rank reference image generator configured for removing noise from one or more test images for a specimen thereby generating one or more reference images corresponding to the one or more test images.1. A system configured to detect defects on a specimen, comprising: one or more computer systems; and one or more components executed by the one or more computer systems, wherein the one or more components comprise a deep metric learning defect detection model configured for: projecting a test image generated for a specimen and a corresponding reference image into latent space; for one or more different portions of the test image, determining a distance in the latent space between the one or more different portions and corresponding one or more portions of the corresponding reference image; and detecting defects in the one or more different portions of the test image based on the distances determined for the one or more different portions of the test image, respectively. 2. The system of claim 1, wherein the test image and the corresponding reference image are for corresponding locations in different dies on the specimen. 3. The system of claim 1, wherein the test image and the corresponding reference image are for corresponding locations in different cells on the specimen. 4. The system of claim 1, wherein the test image and the corresponding reference image are generated for the specimen without using design data for the specimen. 5. The system of claim 1, wherein the test image is generated for the specimen by an imaging system that directs energy to and detects energy from the specimen, and wherein the corresponding reference image is generated without using the specimen. 6. The system of claim 5, wherein the corresponding reference image is acquired from a database containing design data for the specimen. 7. The system of claim 1, wherein the one or more computer systems are configured for inputting design data for the specimen into the deep metric learning defect detection model, and wherein the deep metric learning defect detection model is further configured for performing said detecting using the design data. 8. The system of claim 1, wherein said detecting is performed with one or more parameters determined from care areas for the specimen. 9. The system of claim 8, wherein the one or more computer systems are configured for inputting information for the care areas into the deep metric learning defect detection model. 10. The system of claim 1, wherein said detecting is performed without information for care areas for the specimen. 11. The system of claim 1, wherein the test image is generated in a logic area of the specimen. 12. The system of claim 1, wherein the test image is generated in an array area of the specimen. 13. The system of claim 1, wherein the different portions of the test image comprise different pixels in the test image. 14. 
The system of claim 1, wherein the deep metric learning defect detection model is further configured for projecting an additional corresponding reference image into the latent space and determining an average of the corresponding reference image and the additional corresponding reference image as a reference region in the latent space, and wherein the one or more portions of the corresponding reference image used for determining the distance comprise the reference region. 15. The system of claim 1, wherein the corresponding reference image comprises a non-defective test image for the specimen, wherein projecting the corresponding reference image comprises learning a reference region in the latent space, and wherein the one or more portions of the corresponding reference image used for determining the distance comprise the reference region. 16. The system of claim 1, wherein the deep metric learning defect detection model has a Siamese network architecture. 17. The system of claim 1, wherein the deep metric learning defect detection model has a triplet network architecture. 18. The system of claim 1, wherein the deep metric learning defect detection model has a quadruplet network architecture. 19. The system of claim 1, wherein the deep metric learning defect detection model comprises one or more deep learning convolution filters, and wherein the one or more computer systems are configured for determining a configuration of the one or more deep learning convolution filters based on physics involved in generating the test image. 20. The system of claim 1, wherein the deep metric learning defect detection model comprises one or more deep learning convolution filters, and wherein the one or more computer systems are configured for determining a configuration of the one or more deep learning convolution filters based on imaging hardware used for generating the test image. 21. The system of claim 20, wherein determining the configuration comprises determining one or more parameters of the one or more deep learning convolution filters based on a point spread function of the imaging hardware. 22. The system of claim 21, wherein the one or more parameters of the one or more deep learning convolution filters comprise one or more of filter size, filter symmetry, and filter depth. 23. The system of claim 21, wherein determining the one or more parameters of the one or more deep learning convolution filters comprises learning the one or more parameters by optimizing a loss function. 24. The system of claim 20, wherein determining the configuration comprises selecting the one or more deep learning convolution filters from a predetermined set of deep learning convolution filters based on a point spread function of the imaging hardware. 25. The system of claim 24, wherein one or more parameters of the one or more deep learning convolution filters in the predetermined set are fixed. 26. The system of claim 24, wherein determining the configuration further comprises fine tuning one or more initial parameters of the one or more deep learning convolution filters by optimizing a loss function. 27. 
The system of claim 1, wherein the one or more components further comprise a learnable low-rank reference image generator configured for generating the corresponding reference image, wherein the one or more computer systems are configured for inputting one or more test images generated for the specimen into the learnable low-rank reference image generator, wherein the one or more test images are generated for different locations on the specimen corresponding to the same location in a design for the specimen, and wherein the learnable low-rank reference image generator is further configured for removing noise from the one or more test images thereby generating the corresponding reference image. 28. The system of claim 1, wherein the test image and an additional test image are generated for the specimen with different modes of an imaging system, respectively; wherein the deep metric learning defect detection model is further configured for projecting the test image and the corresponding reference image into a first latent space, projecting the additional test image and an additional corresponding reference image into a second latent space, and combining the first and second latent spaces into a joint latent space; and wherein the latent space used for determining the distance is the joint latent space. 29. The system of claim 1, wherein the one or more computer systems are configured for inputting design data for the specimen into the deep metric learning defect detection model; wherein the test image and an additional test image are generated for the specimen with different modes of an imaging system, respectively; wherein the deep metric learning defect detection model is further configured for projecting the test image and the corresponding reference image into a first latent space, projecting the additional test image and an additional corresponding reference image into a second latent space, projecting the design data into a third latent space, and combining the first, second, and third latent spaces into a joint latent space; and wherein the latent space used for determining the distance is the joint latent space. 30. The system of claim 1, wherein the one or more computer systems are configured for inputting design data for the specimen into the deep metric learning defect detection model; wherein the deep metric learning defect detection model is further configured for projecting the test image and the corresponding reference image into a first latent space, projecting the design data into a second latent space, and combining the first and second latent spaces into a joint latent space; and wherein the latent space used for determining the distance is the joint latent space. 31. 
The system of claim 1, wherein the one or more computer systems are configured for inputting design data for the specimen into the deep metric learning defect detection model; wherein the test image and an additional test image are generated for the specimen with different modes of an imaging system, respectively; wherein the deep metric learning defect detection model is further configured for projecting a first set comprising one or more of the test image and the corresponding reference image, the additional test image and an additional corresponding reference image, and the design data into a first latent space, projecting a second set comprising one or more of the test image and the corresponding reference image, the additional test image and the additional corresponding reference image, and the design data into a second latent space, and combining the first and second latent spaces into a joint latent space; and wherein the latent space used for determining the distance is the joint latent space. 32. The system of claim 1, wherein the one or more computer systems are configured for training the deep metric learning defect detection model with one or more training images and pixel-level ground truth information for the one or more training images. 33. The system of claim 32, wherein the one or more training images and pixel-level ground truth information are generated from a process window qualification wafer. 34. The system of claim 1, wherein the one or more computer systems are configured for performing active learning for training the deep metric learning defect detection model. 35. The system of claim 1, wherein the specimen is a wafer. 36. The system of claim 1, wherein the specimen is a reticle. 37. A system configured to generate a reference image for a specimen, comprising: one or more computer systems; and one or more components executed by the one or more computer systems, wherein the one or more components comprise a learnable low-rank reference image generator, wherein the one or more computer systems are configured for inputting one or more test images for a specimen into the learnable low-rank reference image generator, wherein the one or more test images are generated for different locations on the specimen corresponding to the same location in a design for the specimen, and wherein the learnable low-rank reference image generator is configured for removing noise from the one or more test images thereby generating one or more reference images corresponding to the one or more test images; and wherein a defect detection component detects defects on the specimen based on the one or more test images and their corresponding one or more reference images. 38.-65. (canceled)
2,600
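Editorial note: the claims above describe the core detection loop of this record, a shared encoder projects a test image and its reference into a latent space, a per-pixel distance is computed between the two projections, and pixels with large distances are flagged as defects (claims 1, 13, and 16). Below is a minimal sketch of that loop, assuming a toy two-layer convolutional encoder and an arbitrary threshold; neither comes from the application.

```python
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    """Toy fully convolutional encoder shared by the test and reference
    branches (the Siamese architecture of claim 16). Padding keeps the
    spatial size, so every pixel gets its own latent vector."""
    def __init__(self, in_channels=1, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, latent_dim, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def detect_defects(test_img, ref_img, encoder, threshold=0.5):
    """Project both images into the latent space, take the per-pixel
    Euclidean distance between the projections, and flag pixels whose
    distance exceeds the (assumed) threshold (claims 1 and 13)."""
    with torch.no_grad():
        z_test = encoder(test_img)   # (N, D, H, W)
        z_ref = encoder(ref_img)
    dist = torch.linalg.vector_norm(z_test - z_ref, dim=1)  # (N, H, W)
    return dist > threshold          # boolean defect mask

encoder = SiameseEncoder()           # untrained; a real model would be trained
test = torch.rand(1, 1, 64, 64)      # placeholder images
ref = torch.rand(1, 1, 64, 64)
print(detect_defects(test, ref, encoder).sum().item(), "pixels flagged")
```

Claims 27 and 37 add a learnable low-rank reference image generator that denoises a stack of test images taken at different die locations sharing the same design. A non-learned stand-in for that idea is a truncated SVD across the stack; the rank here is an assumed free parameter, used purely for illustration:

```python
import numpy as np

def low_rank_reference(test_images, rank=1):
    """Stack images (one row per die location), keep only the top
    singular components, and return the denoised stack: structure shared
    across locations survives, location-independent noise is suppressed."""
    n, h, w = test_images.shape
    flat = test_images.reshape(n, h * w)
    u, s, vt = np.linalg.svd(flat, full_matrices=False)
    s[rank:] = 0.0                    # truncate to the assumed rank
    return ((u * s) @ vt).reshape(n, h, w)

stack = np.random.rand(8, 64, 64)     # 8 die locations, same design location
refs = low_rank_reference(stack, rank=1)
```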
10,897
10,897
15,845,790
2,649
A transceiver for a vehicle for communication in a mobile radio system includes one or more interfaces for a plurality of antennas, and a transceiving device configured to communicate via the one or more interfaces and via at least a part of the plurality of antennas in the mobile radio system. The transceiver also includes a control device configured to control the transceiving device and the one or more interfaces, where the control device determines, via a first cluster of the plurality of antennas, information about a radio channel between the first cluster of antennas and a base station of the mobile radio system, and communicates via a second cluster of the plurality of antennas with the base station of the mobile radio system.
1. A transceiver for a vehicle for communication in a mobile radio system, comprising: one or more interfaces for a plurality of antennas; a transceiving device configured to communicate via the one or more interfaces and via at least a part of the plurality of antennas in the mobile radio system; and a control device configured to control the transceiving device and the one or more interfaces, wherein the control device is configured to determine, via a first cluster of the plurality of antennas, information about a radio channel between the first cluster of antennas and a base station of the mobile radio system and to communicate via a second cluster of the plurality of antennas with the base station of the mobile radio system. 2. The transceiver as claimed in claim 1, wherein the plurality of antennas comprises antennas having at least one of different orientations, different polarizations, different mounting locations on the vehicle, different antenna gains and different radiation characteristics, wherein the plurality of antennas corresponds to an antenna system having at least one of decentralized and distributed antennas. 3. The transceiver as claimed in claim 1, wherein the information about the radio channel comprises information about at least one direction of incidence of radio signals. 4. The transceiver as claimed in claim 1, wherein the control device is configured to determine, using the information about the radio channel between the first cluster of the antennas and the base station of the mobile radio system, information about a radio channel between the second cluster of the antennas and the base station of the mobile radio system. 5. The transceiver as claimed in claim 4, wherein the control device is configured to determine the information about the radio channel between the second cluster of the antennas and the base station of the mobile radio system based on at least one of a speed, availability information about the mobile radio system, and a direction of travel of the vehicle. 6. The transceiver as claimed in claim 5, wherein the control device is configured to determine, via the one or more interfaces, information about the at least one of the speed, availability information about the mobile radio system, and the direction of travel of the vehicle. 7. The transceiver as claimed in claim 4, wherein the control device is configured to determine the information about the radio channel between the second cluster of the antennas and the base station of the mobile radio system based on at least one of (i) an assumption that at least one antenna of the first cluster of antennas is arranged in a direction of travel of the vehicle in front of at least one antenna of the second cluster, and (ii) an assumption that an antenna of the second cluster of antennas experiences the same radio channel as an antenna of the first cluster delayed in time. 8. The transceiver as claimed in claim 1, wherein the control device is configured to adaptively match a selection of antennas from the plurality of antennas for the first and the second cluster of antennas. 9. The transceiver as claimed in claim 7, wherein the control device is configured to select the first cluster of antennas based on a direction of travel of the vehicle such that at least one antenna of the first cluster is arranged in front of at least one antenna of the second cluster in the direction of travel of the vehicle. 10. 
The transceiver as claimed in claim 1, wherein the control device is configured to adaptively match a number of antennas in at least one of the first cluster and the second cluster of antennas. 11. The transceiver as claimed in claim 1, wherein the transceiving device comprises two or more transceiving modules coupled to the plurality of antennas. 12. The transceiver as claimed in claim 1, wherein the control device is configured to perform beam forming with respect to the base station of the mobile radio system via the antennas in the second cluster. 13. The transceiver as claimed in claim 12, wherein the control device is configured to adaptively match a beam forming via the antennas in the second cluster. 14. A vehicle having a transceiver, wherein the transceiver comprises: one or more interfaces for a plurality of antennas; a transceiving device configured to communicate via the one or more interfaces and via at least a part of the plurality of antennas in the mobile radio system; and a control device configured to control the transceiving device and the one or more interfaces, wherein the control device is configured to determine, via a first cluster of the plurality of antennas, information about a radio channel between the first cluster of antennas and a base station of the mobile radio system and to communicate via a second cluster of the plurality of antennas with the base station of the mobile radio system. 15. The vehicle as claimed in claim 14 further comprising the plurality of antennas, wherein the first cluster of antennas comprises the same number of antennas as the second cluster of antennas. 16. The vehicle as claimed in claim 15, wherein the antennas of the first cluster have the same geometry with respect to one another as the antennas of the second cluster. 17. The vehicle as claimed in claim 15, wherein the antennas of the first cluster have the same antenna characteristic as the antennas of the second cluster. 18. The vehicle as claimed in claim 15, wherein the first cluster comprises other antennas than the second cluster. 19. A method for a transceiver for a vehicle for communication in a mobile radio system, comprising the acts of: determining information about a radio channel between a first cluster of antennas of a plurality of antennas and a base station of the mobile radio system; and communicating with the base station of the mobile radio system via a second cluster of the plurality of antennas.
A transceiver for a vehicle for communication in a mobile radio system includes one or more interfaces for a plurality of antennas, and a transceiving device configured to communicate via the one or more interfaces and via at least a part of the plurality of antennas in the mobile radio system. The transceiver also includes a control device configured to control the transceiving device and the one or more interfaces, where the control device determines, via a first cluster of the plurality of antennas, information about a radio channel between the first cluster of antennas and a base station of the mobile radio system, and communicates via a second cluster of the plurality of antennas with the base station of the mobile radio system.1. A transceiver for a vehicle for communication in a mobile radio system, comprising: one or more interfaces for a plurality of antennas; a transceiving device configured to communicate via the one or more interfaces and via at least a part of the plurality of antennas in the mobile radio system; and a control device configured to control the transceiving device and the one or more interfaces, wherein the control device is configured to determine, via a first cluster of the plurality of antennas, information about a radio channel between the first cluster of antennas and a base station of the mobile radio system and to communicate via a second cluster of the plurality of antennas with the base station of the mobile radio system. 2. The transceiver as claimed in claim 1, wherein the plurality of antennas comprises antennas having at least one of different orientations, different polarizations, different mounting locations on the vehicle, different antenna gains and different radiation characteristics, wherein the plurality of antennas corresponds to an antenna system having at least one of decentralized and distributed antennas. 3. The transceiver as claimed in claim 1, wherein the information about the radio channel comprises information about at least one direction of incidence of radio signals. 4. The transceiver as claimed in claim 1, wherein the control device is configured to determine, using the information about the radio channel between the first cluster of the antennas and the base station of the mobile radio system, information about a radio channel between the second cluster of the antennas and the base station of the mobile radio system. 5. The transceiver as claimed in claim 4, wherein the control device is configured to determine the information about the radio channel between the second cluster of the antennas and the base station of the mobile radio system based on at least one of a speed, availability information about the mobile radio system, and a direction of travel of the vehicle. 6. The transceiver as claimed in claim 5, wherein the control device is configured to determine, via the one or more interfaces, information about the at least one of the speed, availability information about the mobile radio system, and the direction of travel of the vehicle. 7. 
The transceiver as claimed in claim 4, wherein the control device is configured to determine the information about the radio channel between the second cluster of the antennas and the base station of the mobile radio system based on at least one of (i) an assumption that at least one antenna of the first cluster of antennas is arranged in a direction of travel of the vehicle in front of at least one antenna of the second cluster, and (ii) an assumption that an antenna of the second cluster of antennas experiences the same radio channel as an antenna of the first cluster delayed in time. 8. The transceiver as claimed in claim 1, wherein the control device is configured to adaptively match a selection of antennas from the plurality of antennas for the first and the second cluster of antennas. 9. The transceiver as claimed in claim 7, wherein the control device is configured to select the first cluster of antennas based on a direction of travel of the vehicle such that at least one antenna of the first cluster is arranged in front of at least one antenna of the second cluster in the direction of travel of the vehicle. 10. The transceiver as claimed in claim 1, wherein the control device is configured to adaptively match a number of antennas in at least one of the first cluster and the second cluster of antennas. 11. The transceiver as claimed in claim 1, wherein the transceiving device comprises two or more transceiving modules coupled to the plurality of antennas. 12. The transceiver as claimed in claim 1, wherein the control device is configured to perform beam forming with respect to the base station of the mobile radio system via the antennas in the second cluster. 13. The transceiver as claimed in claim 12, wherein the control device is configured to adaptively match a beam forming via the antennas in the second cluster. 14. A vehicle having a transceiver, wherein the transceiver comprises: one or more interfaces for a plurality of antennas; a transceiving device configured to communicate via the one or more interfaces and via at least a part of the plurality of antennas in the mobile radio system; and a control device configured to control the transceiving device and the one or more interfaces, wherein the control device is configured to determine, via a first cluster of the plurality of antennas, information about a radio channel between the first cluster of antennas and a base station of the mobile radio system and to communicate via a second cluster of the plurality of antennas with the base station of the mobile radio system. 15. The vehicle as claimed in claim 14 further comprising the plurality of antennas, wherein the first cluster of antennas comprises the same number of antennas as the second cluster of antennas. 16. The vehicle as claimed in claim 15, wherein the antennas of the first cluster have the same geometry with respect to one another as the antennas of the second cluster. 17. The vehicle as claimed in claim 15, wherein the antennas of the first cluster have the same antenna characteristic as the antennas of the second cluster. 18. The vehicle as claimed in claim 15, wherein the first cluster comprises other antennas than the second cluster. 19. 
A method for a transceiver for a vehicle for communication in a mobile radio system, comprising the acts of: determining information about a radio channel between a first cluster of antennas of a plurality of antennas and a base station of the mobile radio system; and communicating with the base station of the mobile radio system via a second cluster of the plurality of antennas.
2,600
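Editorial note: assumption (ii) of claim 7 above is the key geometric trick in this record. While the vehicle moves, a rear antenna passes through the positions a front antenna occupied spacing/speed seconds earlier, so the rear cluster's channel can be predicted by replaying the front cluster's estimates with that delay. A minimal sketch follows, with the antenna spacing, speed, and channel sampling rate as invented example values:

```python
def predict_rear_channel(front_estimates, spacing_m, speed_mps, sample_rate_hz):
    """Replay the front cluster's channel estimates with a delay of
    spacing / speed: after that long, the rear antenna occupies the spot
    the front antenna measured from (assumption (ii) of claim 7)."""
    delay = round(spacing_m / speed_mps * sample_rate_hz)  # delay in samples
    # rear[i] corresponds to front[i - delay], valid once i >= delay.
    return [front_estimates[i - delay] for i in range(delay, len(front_estimates))]

# Example: 2 m antenna spacing at 20 m/s (72 km/h), channel sampled at
# 100 Hz, gives a 100 ms (10-sample) delay. All values are placeholders.
front = [complex(i, -i) for i in range(100)]  # placeholder channel estimates
rear = predict_rear_channel(front, spacing_m=2.0, speed_mps=20.0, sample_rate_hz=100)
```

This also suggests why claim 9 selects the first cluster by direction of travel: the prediction only works when the measuring cluster is the one in front.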
10,898
10,898
16,116,359
2,642
A vehicle communication management unit is provided that includes at least one configurable communication interface, at least one memory and a communication controller. Each configuration communication interface is configured to interface signals between a communication link and the vehicle communication management unit using a select communication protocol. The memory is used to store operating instructions of the communication management unit including an interface configuration table. The interface configuration table includes communication operating parameters for select communication protocols. The communication controller is used to control communication operations of the communication management unit. The communication controller is configured to determine a type of communication protocol used in a communication link coupled to the at least one configurable communication interface. The communication controller is further configured to configure the at least one configurable communication interface with communication operating parameters stored in the configuration table associated with the determined type of communication protocol.
1. A vehicle communication management unit comprising: at least one configurable communication interface, each configuration communication interface configured to interface signals between a communication link and the vehicle communication management unit using a select communication protocol; at least one memory to store operating instructions of the communication management unit including an interface configuration table, the interface configuration table including communication operating parameters for select communication protocols; and a communication controller to control communication operations of the communication management unit, the communication controller configured to determine a type of communication protocol used in a communication link coupled to the at least one configurable communication interface, the communication controller further configured to configure the at least one configurable communication interface with communication operating parameters stored in the configuration table associated with the determined type of communication protocol, wherein the same at least one configurable interface is selectively configured to communicate using different communication protocols. 2. The vehicle communication management unit of claim 1, wherein communication operating parameters of select communication protocols allow for a communication of critical safety information signals. 3. The vehicle communication management unit of claim 1, wherein the communication controller is further configured to set the at least one configurable communication interface into a master/backup relationship with another one of the at least one configurable communication interface of the communication management unit. 4. The vehicle communication management unit of claim 1, wherein the at least one configurable communication interface is a satellite data unit interface. 5. The vehicle communication management unit of claim 1, wherein the communication controller is configured to configure the at least one configurable communication interface using the communication operating parameters stored in the configuration table upon startup of the vehicle communication management unit. 6. The vehicle communication management unit of claim 1, further comprising: at least one protocol timer used by the communication controller for select communication protocols. 7. 
A communication system comprising: at least one memory to store an interface configuration table, the interface configuration table including communication operating parameters for select communication protocols; a plurality of communication links; and a communication management unit including: a plurality of configuration communication interfaces, each configuration communication interface configured to interface signals between an associated communication link of the plurality of communication links and the communication management unit using a select communication protocol, and a communication controller to control communication operations of the communication management unit, the communication controller configured to determine a type of communication protocol used in a communication link coupled to the at least one configurable communication interface, the communication controller further configured to configure each configurable communication interface with communication operating parameters stored in the configuration table associated with the determined type of communication protocol, wherein the same at least one configurable interface is selectively configured to communicate using different communication protocols. 8. The communication system of claim 7, wherein the at least one memory is within the communication management unit. 9. The communication system of claim 7, wherein the at least one memory is located external to the communication management unit. 10. The communication system of claim 9, further comprising: an aircraft personality module, the at least one memory located within the aircraft personality module. 11. The communication system of claim 7, further comprising: a communication link for each configuration communication interface, each communication link including at least one transceiver and at least one antenna. 12. The communication system of claim 7, further comprising: an input/output in communication with the communication controller. 13. The communication system of claim 12, wherein the input/output includes a human machine interface. 14. The communication system of claim 7, wherein the communication controller is further configured to set at least two of the configuration communication interfaces of the plurality of configuration communication interfaces into a master/backup relationship with one another. 15. The communication system of claim 7, wherein the communication controller is further configured to identify which of the plurality of configuration communication interfaces are allowed to send and receive critical information and to use only those identified configuration communication interfaces to send and receive the critical information. 16. A method of operating a vehicle communication management unit with a plurality of configurable communication interfaces, the method comprising: identifying a communication type associated with a message; retrieving configuration information associated with the communication type from a configuration interface table; configuring a configurable communication interface of the plurality of configurable communication interfaces based on the retrieved configuration information, wherein the configurable interface is a generic interface that is selectively configured to communicate using different communication protocols; and interfacing communications through the configurable communication interface. 17. 
The method of claim 16, further comprising: assigning a master/backup relationship to at least one pair of the plurality of configurable communication interfaces. 18. The method of claim 16, further comprising: determining if a message to be communicated is a critical message; determining if the configuration of the configurable communication interface allows the interface of critical messages; and using the communication interface when it has been determined that the message is a critical message and the configuration communication interface is allowed to interface critical messages. 19. The method of claim 18, further comprising: using a different configurable communication interface of the plurality of configurable communication interfaces when it has been determined that the configurable communication interface has not been configured to interface critical messages. 20. The method of claim 16, further comprising: configuring configurable communication interfaces in communication with associated communication links upon power up of the vehicle communication management unit.
A vehicle communication management unit is provided that includes at least one configurable communication interface, at least one memory and a communication controller. Each configuration communication interface is configured to interface signals between a communication link and the vehicle communication management unit using a select communication protocol. The memory is used to store operating instructions of the communication management unit including an interface configuration table. The interface configuration table includes communication operating parameters for select communication protocols. The communication controller is used to control communication operations of the communication management unit. The communication controller is configured to determine a type of communication protocol used in a communication link coupled to the at least one configurable communication interface. The communication controller is further configured to configure the at least one configurable communication interface with communication operating parameters stored in the configuration table associated with the determined type of communication protocol.1. A vehicle communication management unit comprising: at least one configurable communication interface, each configuration communication interface configured to interface signals between a communication link and the vehicle communication management unit using a select communication protocol; at least one memory to store operating instructions of the communication management unit including an interface configuration table, the interface configuration table including communication operating parameters for select communication protocols; and a communication controller to control communication operations of the communication management unit, the communication controller configured to determine a type of communication protocol used in a communication link coupled to the at least one configurable communication interface, the communication controller further configured to configure the at least one configurable communication interface with communication operating parameters stored in the configuration table associated with the determined type of communication protocol, wherein the same at least one configurable interface is selectively configured to communicate using different communication protocols. 2. The vehicle communication management unit of claim 1, wherein communication operating parameters of select communication protocols allow for a communication of critical safety information signals. 3. The vehicle communication management unit of claim 1, wherein the communication controller is further configured to set the at least one configurable communication interface into a master/backup relationship with another one of the at least one configurable communication interface of the communication management unit. 4. The vehicle communication management unit of claim 1, wherein the at least one configurable communication interface is a satellite data unit interface. 5. The vehicle communication management unit of claim 1, wherein the communication controller is configured to configure the at least one configurable communication interface using the communication operating parameters stored in the configuration table upon startup of the vehicle communication management unit. 6. The vehicle communication management unit of claim 1, further comprising: at least one protocol timer used by the communication controller for select communication protocols. 7. 
A communication system comprising: at least one memory to store an interface configuration table, the interface configuration table including communication operating parameters for select communication protocols; a plurality of communication links; and a communication management unit including: a plurality of configuration communication interfaces, each configuration communication interface configured to interface signals between an associated communication link of the plurality of communication links and the communication management unit using a select communication protocol, and a communication controller to control communication operations of the communication management unit, the communication controller configured to determine a type of communication protocol used in a communication link coupled to the at least one configurable communication interface, the communication controller further configured to configure each configurable communication interface with communication operating parameters stored in the configuration table associated with the determined type of communication protocol, wherein the same at least one configurable interface is selectively configured to communicate using different communication protocols. 8. The communication system of claim 7, wherein the at least one memory is within the communication management unit. 9. The communication system of claim 7, wherein the at least one memory is located external to the communication management unit. 10. The communication system of claim 9, further comprising: an aircraft personality module, the at least one memory located within the aircraft personality module. 11. The communication system of claim 7, further comprising: a communication link for each configuration communication interface, each communication link including at least one transceiver and at least one antenna. 12. The communication system of claim 7, further comprising: an input/output in communication with the communication controller. 13. The communication system of claim 12, wherein the input/output includes a human machine interface. 14. The communication system of claim 7, wherein the communication controller is further configured to set at least two of the configuration communication interfaces of the plurality of configuration communication interfaces into a master/backup relationship with one another. 15. The communication system of claim 7, wherein the communication controller is further configured to identify which of the plurality of configuration communication interfaces are allowed to send and receive critical information and to use only those identified configuration communication interfaces to send and receive the critical information. 16. A method of operating a vehicle communication management unit with a plurality of configurable communication interfaces, the method comprising: identifying a communication type associated with a message; retrieving configuration information associated with the communication type from a configuration interface table; configuring a configurable communication interface of the plurality of configurable communication interfaces based on the retrieved configuration information, wherein the configurable interface is a generic interface that is selectively configured to communicate using different communication protocols; and interfacing communications through the configurable communication interface. 17. 
The method of claim 16, further comprising: assigning a master/backup relationship to at least one pair of the plurality of configurable communication interfaces. 18. The method of claim 16, further comprising: determining if a message to be communicated is a critical message; determining if the configuration of the configurable communication interface allows the interface of critical messages; and using the communication interface when it has been determined that the message is a critical message and the configuration communication interface is allowed to interface critical messages. 19. The method of claim 18, further comprising: using a different configurable communication interface of the plurality of configurable communication interfaces when it has been determined that the configurable communication interface has not been configured to interface critical messages. 20. The method of claim 16, further comprising: configuring configurable communication interfaces in communication with associated communication links upon power up of the vehicle communication management unit.
2,600
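Editorial note: the claims in this record describe one generic, reconfigurable interface driven by a protocol-indexed configuration table (claims 1 and 16), plus routing rules that keep critical messages on interfaces permitted to carry them, with fallback to another interface otherwise (claims 18 and 19). A minimal sketch of that table-driven configuration and routing follows; the table entries, protocol names, and parameter fields are invented placeholders, not values from the application:

```python
# Hypothetical interface configuration table (claim 1): protocol name ->
# operating parameters. Real entries would come from certified config data.
CONFIG_TABLE = {
    "VHF": {"baud": 2400, "critical_ok": True},
    "SATCOM": {"baud": 9600, "critical_ok": True},
    "WIFI": {"baud": 115200, "critical_ok": False},
}

class ConfigurableInterface:
    """A generic interface that is selectively configured for different
    protocols (claim 16) instead of being hard-wired to one."""
    def __init__(self, name):
        self.name = name
        self.params = None

    def configure(self, protocol):
        self.params = CONFIG_TABLE[protocol]

def send(message, is_critical, interfaces):
    """Route the message to an interface whose configuration permits it,
    falling back to another interface when a critical message reaches an
    interface not cleared for critical traffic (claims 18 and 19)."""
    for iface in interfaces:
        if iface.params and (not is_critical or iface.params["critical_ok"]):
            print(f"sending {message!r} via {iface.name}")
            return iface
    raise RuntimeError("no configured interface accepts this message")

a, b = ConfigurableInterface("if0"), ConfigurableInterface("if1")
a.configure("WIFI")      # not cleared to carry critical traffic
b.configure("SATCOM")    # cleared
send("position report", is_critical=True, interfaces=[a, b])  # uses if1
```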
10,899
10,899
15,948,417
2,674
A system and method for matching audio content to virtual reality visual content. The method includes analyzing received visual content and received metadata to determine an optimal audio source associated with the received visual content; configuring the optimal audio source to capture audio content; synthesizing the captured audio content with the received visual content; and providing the synthesized captured audio content and received visual content to a virtual reality (VR) device.
1. A method for matching audio content to virtual reality visual content, comprising: analyzing received visual content and metadata to determine an optimal audio source associated with the received visual content; configuring the optimal audio source to capture audio content; synthesizing the audio content with the received visual content; and providing the synthesized audio content and received visual content to a virtual reality (VR) device. 2. The method of claim 1, wherein the optimal audio source is an audio source that provides the clearest sound associated with the received visual content among all available audio sources. 3. The method of claim 1, further comprising: determining a field of view of the visual content; and updating the determined optimal audio source based on the determined field of view. 4. The method of claim 3, wherein the field of view is determined based on the received metadata. 5. The method of claim 1, wherein the received metadata includes at least one of: location pointers, time pointers, perspective indicators, a position of the VR device relative to a starting position or a predetermined baseline, a speed at which the position of the VR device changes, eye tracking parameters, gyroscope measurements, inertial measurement unit measurements, and accelerometer measurements. 6. The method of claim 1, wherein synthesizing the audio content further comprises: matching the received visual content to the audio content with minimal lag or buffering. 7. The method of claim 1, wherein the audio content and received visual content are received over a live stream. 8. The method of claim 1, wherein the audio content and received visual content are previously recorded and stored on and retrieved from a storage. 9. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process, the process comprising: analyzing received visual content and metadata to determine an optimal audio source associated with the received visual content; configuring the optimal audio source to capture audio content; synthesizing the audio content with the received visual content; and providing the synthesized audio content and received visual content to a virtual reality (VR) device. 10. A system for matching audio content to virtual reality visual content, comprising: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: analyze received visual content and metadata to determine an optimal audio source associated with the received visual content; configure the optimal audio source to capture audio content; synthesize the audio content with the received visual content; and provide the synthesized audio content and received visual content to a virtual reality (VR) device. 11. The system of claim 10, wherein the optimal audio source is an audio source that provides the clearest sound associated with the received visual content among all available audio sources. 12. The system of claim 10, the system further configured to: determine a field of view of the visual content; and update the determined optimal audio source based on the determined field of view. 13. The system of claim 12, wherein the field of view is determined based on the received metadata. 14. 
The system of claim 10, wherein the received metadata includes at least one of: location pointers; time pointers; perspective indicators; a position of the VR device relative to a starting position or a predetermined baseline; a speed at which the position of the VR device changes; eye tracking parameters; gyroscope measurements; inertial measurement unit measurements; and accelerometer measurements. 15. The system of claim 10, wherein synthesizing the audio content further comprises: matching the received visual content to the audio content with minimal lag or buffering. 16. The system of claim 10, wherein the audio content and received visual content are received over a live stream. 17. The system of claim 10, wherein the audio content and received visual content are previously recorded and stored on and retrieved from a storage.
A system and method for matching audio content to virtual reality visual content. The method includes analyzing received visual content and received metadata to determine an optimal audio source associated with the received visual content; configuring the optimal audio source to capture audio content; synthesizing the captured audio content with the received visual content; and providing the synthesized captured audio content and received visual content to a virtual reality (VR) device.1. A method for matching audio content to virtual reality visual content, comprising: analyzing received visual content and metadata to determine an optimal audio source associated with the received visual content; configuring the optimal audio source to capture audio content; synthesizing the audio content with the received visual content; and providing the synthesized audio content and received visual content to a virtual reality (VR) device. 2. The method of claim 1, wherein the optimal audio source is an audio source that provides the clearest sound associated with the received visual content among all available audio sources. 3. The method of claim 1, further comprising: determining a field of view of the visual content; and updating the determined optimal audio source based on the determined field of view. 4. The method of claim 3, wherein the field of view is determined based on the received metadata. 5. The method of claim 1, wherein the received metadata includes at least one of: location pointers, time pointers, perspective indicators, a position of the VR device relative to a starting position or a predetermined baseline, a speed at which the position of the VR device changes, eye tracking parameters, gyroscope measurements, inertial measurement unit measurements, and accelerometer measurements. 6. The method of claim 1, wherein synthesizing the audio content further comprises: matching the received visual content to the audio content with minimal lag or buffering. 7. The method of claim 1, wherein the audio content and received visual content are received over a live stream. 8. The method of claim 1, wherein the audio content and received visual content are previously recorded and stored on and retrieved from a storage. 9. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process, the process comprising: analyzing received visual content and metadata to determine an optimal audio source associated with the received visual content; configuring the optimal audio source to capture audio content; synthesizing the audio content with the received visual content; and providing the synthesized audio content and received visual content to a virtual reality (VR) device. 10. A system for matching audio content to virtual reality visual content, comprising: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: analyze received visual content and metadata to determine an optimal audio source associated with the received visual content; configure the optimal audio source to capture audio content; synthesize the audio content with the received visual content; and provide the synthesized audio content and received visual content to a virtual reality (VR) device. 11. 
The system of claim 10, wherein the optimal audio source is an audio source that provides the clearest sound associated with the received visual content among all available audio sources. 12. The system of claim 10, the system further configured to: determine a field of view of the visual content; and update the determined optimal audio source based on the determined field of view. 13. The system of claim 12, wherein the field of view is determined based on the received metadata. 14. The system of claim 10, wherein the received metadata includes at least one of: location pointers; time pointers; perspective indicators; a position of the VR device relative to a starting position or a predetermined baseline; a speed at which the position of the VR device changes; eye tracking parameters; gyroscope measurements; inertial measurement unit measurements; and accelerometer measurements. 15. The system of claim 10, wherein synthesizing the audio content further comprises: matching the received visual content to the audio content with minimal lag or buffering. 16. The system of claim 10, wherein the audio content and received visual content are received over a live stream. 17. The system of claim 10, wherein the audio content and received visual content are previously recorded and stored on and retrieved from a storage.
2,600
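Editorial note: claims 1 through 5 of this record leave "optimal audio source" open but tie the selection to the viewer's field of view, which is derived from headset metadata such as gyroscope or eye-tracking measurements. One plausible reading, sketched below with invented source bearings, is to score each candidate source by its angular distance from the center of the field of view and pick the closest; the application itself does not specify the scoring:

```python
def pick_audio_source(fov_center_deg, sources):
    """Score each candidate source by the angular gap between its bearing
    and the center of the field of view, and return the closest one (an
    assumed stand-in for the 'optimal audio source' of claims 1-3)."""
    def angular_gap(bearing):
        # Wrap into [-180, 180) so 350 degrees vs 10 degrees counts as 20.
        return abs((bearing - fov_center_deg + 180) % 360 - 180)
    return min(sources, key=lambda s: angular_gap(s["bearing_deg"]))

# Invented sources, and a field of view 15 degrees right of forward as if
# derived from the headset's gyroscope / eye-tracking metadata (claim 5).
sources = [
    {"id": "stage-left", "bearing_deg": 300},
    {"id": "stage-center", "bearing_deg": 0},
    {"id": "crowd", "bearing_deg": 170},
]
print(pick_audio_source(15.0, sources)["id"])  # -> stage-center
```

Re-running the selection whenever the field-of-view metadata changes implements the update step of claim 3 (and claim 12 of the system claims).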